Dream Variations: Langston Hughes, Dreamtime & DeepDream

Nettrice Gaskins
5 min read · Sep 6, 2024


Image inspired by “Dream Variations”. Courtesy of the author and created using Midjourney v6.1

I’ve been thinking a lot about dreaming, defined as a series of events or images that pass through your mind while you rest. I’m a lifelong daydreamer and I have lucid dreams. Langston Hughes’s “Harlem (Dream Deferred)” and “Dream Variations” are two of my favorite poems.

To fling my arms wide
In the face of the sun,
Dance! Whirl! Whirl!
Till the quick day is done.
Rest at pale evening . . .
A tall, slim tree . . .
Night coming tenderly
Black like me.

Image courtesy of screenrant.com

Dreaming was a hot word on the cyber-culture streets in the early 2010s thanks to Inception (2010). The year the film was released, I enrolled in “Culture & Cognition”, a graduate-level course that combined cognitive studies with science and technology studies (STS), an interdisciplinary field that examines the issues and consequences of science and technology in their historical, cultural, and social contexts. The professor assigned us a chapter from Bradd Shore’s Culture in Mind: Cognition, Culture, and the Problem of Meaning. The chapter on Dreamtime Learning is arguably the hardest to understand because you have to put aside what you know (or think you know) in order to discover how a non-Western culture learns.

Ilgari Inyayimanha (Shared Sky), a collaborative painting by Australian Aboriginal artists from Yamaji Art.

According to Tyson Yunkaporta, Dreamtime or Dreaming is the “continuous action of creation in the present as well as the past, a dynamic interaction between the physical and spiritual worlds.” For my class presentation I showed part of Rabbit-Proof Fence, a 2002 film based on the true story of Molly Craig, Daisy Kadibill and Gracie Fields, who, after being forcibly removed from their mothers in 1931, escaped from an Australian mission settlement in order to find their way home. In telling the story, director Phillip Noyce cuts between two very different worlds: Molly’s world and “chief protector” A.O. Neville’s world.

I learned that even though we share the same planet, our ways of knowing are often very different, and no one way is the right or best way.

Lynette Wallworth’s Collisions (2015) tells the story of Aboriginal elder Nyarri Nyarri Morgan, who lived as the thousand generations before him did in the remote Pilbara desert of Western Australia, until his life was dramatically changed by a collision with the extreme edge of Western science and technology. For this project, Wallworth chose to explore the storytelling potential of VR and saw the form as the perfect vehicle for Nyarri to communicate his story.

It has been mentioned that Elders are worried that youth are too distracted with technology and other worldly things that Dreamtime stories will not be passed on and will inevitably be forgotten. I think the exhibit displayed a traditional story in a very contemporary and relatable way that was able to capture and grasp everyday people without losing the essence and cultural values within the Seven Sister’s Songlines exhibition. — Amy Flannery, NAISDA art curator

In Wallworth’s project the collision of worlds was life-changing. Another type of collision occurred in 2015, the year Google released DeepDream, a computer vision program built on a neural network, a type of AI model loosely inspired by how the brain processes information. DeepDream finds and enhances patterns in images through what has been called “algorithmic pareidolia” (think of seeing an animal in the clouds), generating dream-like, psychedelic imagery. A year later, while teaching AP Computer Science Principles at a high school for the visual and performing arts, I found DeepDream and introduced it to my students.
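
The mechanics behind the dreaminess are surprisingly simple: instead of adjusting a network’s weights to classify an image, DeepDream adjusts the image itself, via gradient ascent, to amplify whatever patterns a chosen layer already responds to. Here is a minimal sketch of that idea in PyTorch; the layer choice, step size, and iteration count are my illustrative assumptions, not Google’s original settings:

```python
# Minimal DeepDream-style sketch: run gradient ascent on the input image
# to amplify the activations of one intermediate layer of a pretrained
# network. Layer, step size, and step count are illustrative assumptions.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.googlenet(weights="DEFAULT").eval()  # DeepDream used a GoogLeNet/Inception model
for p in model.parameters():
    p.requires_grad_(False)  # only the image gets optimized

# Capture an intermediate layer's activations with a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(feat=out)
)

img = transforms.Compose([transforms.Resize(512), transforms.ToTensor()])(
    Image.open("input.jpg")  # placeholder file name
).unsqueeze(0)
img.requires_grad_(True)

for _ in range(20):
    model(img)
    loss = activations["feat"].norm()  # "how strongly does this layer fire?"
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / img.grad.abs().mean()  # normalized ascent step
        img.grad.zero_()
        img.clamp_(0, 1)  # keep valid pixel values
```

The real DeepDream adds refinements such as processing the image at multiple scales (octaves), but the loop above is the heart of the effect.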

A “creepy” DeepDream image. Courtesy of Hitesh Jhamtani

DeepDream was created by Google engineer Alexander Mordvintsev. It has a public-facing instance at Deep Dream Generator, which currently lets people use text-to-image and image-to-image prompts to generate images. Initially, I could explore the original DeepDream, another tool titled ‘Thin Style’, and my favorite tool, ‘Deep Style’ (the latter is still around if you search for it). The technical term for Deep Style is neural style transfer, or NST, which refers to AI-powered algorithms that manipulate one image so that it adopts the appearance or visual style of another. Here is what happens when I apply NST to a Midjourney image:

Midjourney-generated portrait (from a personal source photo)
Deep style is applied using Deep Dream Generator

The difference between the two is both obvious and subtle. I use DDG/Deep Style to remix my Midjourney images, enhancing certain areas of the initial Midjourney-generated image with Deep Style/NST. As a result, I get unique, almost alien colors and textures. I’ve been using Deep Style for eight years, so I’ve collected dozens of style reference images, including fabric and tapestry patterns (my favorites). Dreaming through AI-powered technology often requires the collision of tools and worlds. It requires daily practice, exploration, and experimentation. It’s a different way of working than in the past, especially in the area of digital art.
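
For the technically curious, here is roughly what happens under the hood of NST. This is a sketch of the classic Gatys et al. algorithm that underlies tools like Deep Style, assuming PyTorch; the file names, layer picks, and loss weights are placeholders, and Deep Dream Generator’s exact pipeline is not public:

```python
# Neural style transfer sketch (after Gatys et al.): optimize a copy of the
# content image so its VGG feature maps match the content image while its
# Gram matrices (feature correlations) match the style image.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

vgg = models.vgg19(weights="DEFAULT").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)
for m in vgg:
    if isinstance(m, torch.nn.ReLU):
        m.inplace = False  # keep intermediate activations intact

load = transforms.Compose([transforms.Resize((512, 512)), transforms.ToTensor()])
content = load(Image.open("midjourney_portrait.jpg")).unsqueeze(0)  # placeholder file
style = load(Image.open("tapestry_pattern.jpg")).unsqueeze(0)       # placeholder file

STYLE_LAYERS = {0, 5, 10, 19, 28}  # conv1_1 .. conv5_1, commonly used for style
CONTENT_LAYER = 21                 # conv4_2, commonly used for content

def features(x):
    style_feats, content_feat = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style_feats.append(x)
        if i == CONTENT_LAYER:
            content_feat = x
    return style_feats, content_feat

def gram(f):  # channel-by-channel correlation of feature maps
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

with torch.no_grad():
    target_style = [gram(f) for f in features(style)[0]]
    target_content = features(content)[1]

img = content.clone().requires_grad_(True)  # start from the content image
opt = torch.optim.Adam([img], lr=0.02)

for step in range(300):
    opt.zero_grad()
    style_feats, content_feat = features(img)
    loss = F.mse_loss(content_feat, target_content)  # keep the portrait's structure
    for f, t in zip(style_feats, target_style):
        loss = loss + 1e4 * F.mse_loss(gram(f), t)   # adopt the style's texture
    loss.backward()
    opt.step()
    with torch.no_grad():
        img.clamp_(0, 1)
```

Matching Gram matrices rather than raw features is what lets the output adopt a style image’s colors and textures without copying its layout, which may be part of why fabric and tapestry patterns work so well as style references.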

Written by Nettrice Gaskins

Nettrice is a digital artist, academic, cultural critic and advocate of STEAM education.
