Generative AI Art: 2016 to the Present

Nettrice Gaskins
Nov 24, 2023


Deep (neural) Image Style Transfer via Deep Dream Generator (circa 2019)

Via BoingBoing (Cory Doctorow):

Wagner James Au sez, “AI algorithms, as AOC recently pointed out, often have a racial bias inherited by their creators, to the point where some can’t even ‘see’ people of color. Afrofuturist Nettrice Gaskins teaches Deep Dream’s AI to be aware of great black faces on a deep level.”

Wagner James Au’s blog post was aggregated and commented on in 2019. However, I had been exploring what would soon be called “generative AI art” for at least two years before Au’s and Doctorow’s posts, using Deep Dream Generator. On BoingBoing, Aaron_Hertzmann wrote:

It sure looks like they were generated with Neural Style Transfer, not Deep Dreams. If it were Deep Dreams, there’d been hundreds of mutant doggies, eyeballs, and gazebos everywhere (or just very typical curved lines).

This was the first time I had heard the terms “neural style transfer” and, more specifically, the “Gatys gram-matrix method.” Of course, I looked up Gatys and learned about Leon Gatys, a machine learning researcher with a focus on human health and well-being. He also wrote the paper “A Neural Algorithm of Artistic Style” (if you’re up for a lot of technical or academic jargon, be sure to check it out).
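To make the “gram-matrix method” a little more concrete: in Gatys’ approach, an image’s style is summarized by Gram matrices of the feature maps a convolutional network produces, and a style loss measures how far the generated image’s Gram matrices are from the style image’s. Below is a minimal NumPy sketch of that idea; the layer shape, the normalization, and the random arrays are illustrative assumptions, not the paper’s exact setup.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of one layer's feature maps.

    features: array of shape (channels, height, width), standing in for the
    activations of one convolutional layer (e.g. from VGG in Gatys' paper).
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten the spatial dimensions
    return f @ f.T / (c * h * w)     # channel-to-channel correlations

def style_loss(generated_features, style_features):
    """Mean squared difference between the two Gram matrices."""
    return np.mean((gram_matrix(generated_features) -
                    gram_matrix(style_features)) ** 2)

# Toy example with random arrays standing in for real CNN activations.
rng = np.random.default_rng(0)
generated = rng.standard_normal((64, 32, 32))
style = rng.standard_normal((64, 32, 32))
print(style_loss(generated, style))
```

In the full method this loss is summed over several layers and minimized by updating the generated image’s pixels, which is roughly what Deep Dream Generator’s “deep style” mode automates behind the scenes.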

Deep style + Processing programming language (image style)

In 2019, people were mostly curious about AI art, not freaking out over it, and I had committed to using Deep Dream Generator to produce and post at least one image per day for 365 days. I was experimenting and learning about the possibilities: some of the image styles I uploaded came from using a Leap Motion controller with Processing (code) to generate arrays from the movements of my fingers (see above; a rough code sketch of that idea follows the quote below). I wanted to do things no one else was doing, and the work was getting noticed. Aaron_Hertzmann, again:

What I really like about your results is that they don’t just look like the usual automatic output, they’re better and they show the artist’s hand in making them fully realized, even though the elements of the algorithms are still visible.
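As promised, here is a rough Python sketch of that gesture-to-style idea. The fingertip samples are simulated with a noisy spiral, since my original arrays came from a Leap Motion controller read inside a Processing sketch; the saved image simply stands in for the kind of abstract style source I was uploading to Deep Dream Generator.

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated fingertip path: a noisy spiral stands in for the x/y arrays that
# were originally captured from a Leap Motion controller with Processing.
rng = np.random.default_rng(42)
t = np.linspace(0, 8 * np.pi, 4000)
x = np.cumsum(np.cos(t) + 0.15 * rng.standard_normal(t.size))
y = np.cumsum(np.sin(t) + 0.15 * rng.standard_normal(t.size))

# Render the traced path as an abstract image that could be uploaded as a
# style source (the file name is just a placeholder).
fig, ax = plt.subplots(figsize=(6, 6))
ax.plot(x, y, linewidth=0.6, color="black")
ax.axis("off")
fig.savefig("gesture_style.png", dpi=300, bbox_inches="tight")
plt.close(fig)
```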

The idea of the “artist’s hand (in the algorithm)” reminded me of Malcolm McCullough’s book “Abstracting Craft,” which covers the nature of hand-eye coordination, image culture, aspects of tool usage, issues in human-computer interaction, geometric constructions and abstract methods in design, and the necessity of improvisation.

Early Midjourney + “Song of Solomon”

I didn’t use a text prompt until nearly three years later. Before Midjourney, everything was created using neural style transfer and other methods such as generative adversarial networks (GANs). My interaction with DDG was a collaboration with a machine. The DDG machine told me which images worked better as styles, and three years in I had dozens of image styles I could pick and choose from depending on the project. Text-to-image AI was different because in the early days of Midjourney (2021) I was only able to use text. I interpreted my favorite stories, such as Toni Morrison’s Song of Solomon (see above) and “Karintha” by Jean Toomer:

Her skin is like dusk on the eastern horizon,
O can’t you see it,
O can’t you see it,
Her skin is like dusk on the eastern horizon
. . . When the sun goes down.

The challenge was how to interpret these words in a way that produced images capturing my feelings when reading the stories. Cutting and pasting from the authors’ texts would not have worked well. I had to craft my own “call” statements for the machine to “respond” to, then iterate on those prompts to get the best outputs. This process is like “call-and-response participation” between me (the human) and the machine.

“Karintha” + Midjourney

When you look at the output from Midjourney (see above) you are seeing my interpretation of Jean Toomer’s “Karintha.” I had gone deep into the generative AI rabbit hole to master the process, and as new developments arrive I’m better able to further process the image output. For example, I can use Midjourney to generate an image, then change different sections or regions of it based on what I want the image to convey.

Pan-African flag + Midjourney

Although the prompt for the image above includes 1960s “civil rights,” I wanted the woman depicted in the image to have a red afro, really playing up the red, black, and green of the Pan-African flag, so I used Midjourney’s “Vary (Region)” feature to do that.

Also known as the UNIA flag, the Afro-American flag, and the Black Liberation flag, the distinct red, black, and green Pan-African flag was created in 1920. Sometimes also called the Marcus Garvey flag, it was meant to serve as a marker of freedom, pride, and the political power of Black Americans. — Sydney Clark

Using “Vary (Region)” to add the red afro

And then I remixed the entire thing to get a new, different result:

A remix of the previous image

Although the Midjourney text-to-image process is different from neural image style transfer, it’s still a collaboration and, as Aaron_Hertzmann noted, the artist’s hand is still present in the output, or at least it can be. For me it’s about the possibilities and the opportunity to combine styles (and processes). I often upload Midjourney images into Deep Dream Generator to get different results.

Midjourney + Deep Dream Generator (deep style)

As a final step, I combine the Midjourney and DDG output using Adobe Photoshop. I use the eraser brush in Photoshop to reveal one image from underneath the other. Once again, the artist’s hand makes the generative AI artworks more fully realized. This is where we are in 2023: not simply pasting in text or clicking “generate” buttons but using tools and our imaginations to explore new ways of artmaking.
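For readers who like to script this last step, the eraser-brush reveal is essentially a masked composite of two layers. Here is a rough Python/Pillow analogue, not my actual Photoshop workflow; the file names are placeholders, and the grayscale mask plays the role of the hand-painted eraser strokes.

```python
from PIL import Image

# Placeholder file names; any two same-size renders and a grayscale mask work.
midjourney_layer = Image.open("midjourney_output.png").convert("RGB")
ddg_layer = Image.open("ddg_output.png").convert("RGB")
mask = Image.open("eraser_mask.png").convert("L").resize(midjourney_layer.size)

# Where the mask is white the Midjourney layer shows through; where it is
# black the Deep Dream Generator layer is revealed, like erasing by hand.
blended = Image.composite(midjourney_layer, ddg_layer, mask)
blended.save("combined.png")
```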



Written by Nettrice Gaskins

Nettrice is a digital artist, academic, cultural critic and advocate of STEAM education.