The Algorhythmic Toolbox: Riffing on Bias Using Generative AI

Nettrice Gaskins
5 min read · Jan 24, 2025


Portrait of Billie Holiday, Downbeat, New York, N.Y., ca. Feb. 1947. Photo by William P. Gottlieb.

In the past I’ve drawn analogies between AI art and sampling or remixing in hip-hop culture. I also pointed to techno-vernacular creativity or TVC, a term I coined to capture the creative STEAM engagements of historically marginalized, underrepresented, underestimated or undervalued groups. In my TVC book I mention the riff:

To “riff” means to improvise on a subject by extending a singular idea or inspiration into a practice or habit. In jazz, blues, and other musical genres, riffs are short rhythmic, melodic, or harmonic figures that are repeated to establish the framework of a song.

The term riff entered musical slang in the 1920s, primarily in discussions of jazz, and later of rock and heavy metal. One explanation holds that musicians use riff as a near-synonym for musical idea. Another definition of riff is to experiment with a thing or idea, making changes that create a novel version of it. Billie Holiday was a famous jazz singer, so riffing felt like a fitting approach. One of the first AI-generated portraits I created was of Holiday, using Deep Dream Generator, specifically Deep Style, the platform's term for neural style transfer (NST).

Deep Dream Generator’s first tools

Prior to 2022, we (creators) mostly used neural style transfer (NST), an optimization technique that takes two images, a content image and a style reference image, and blends them so the output looks like the content image, but "painted" in the style of the reference. There was no such thing as 'prompts' in 2017, when I used William P. Gottlieb's photo* of Billie Holiday as the content image and African wax print fabric as the style reference. Deep Dream Generator still refers to NST as 'Deep Style,' and it's still accessible on the platform if you know where to look.
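Under the hood, NST optimizes a generated image against two objectives: a content loss (match the photo's features) and a style loss built from Gram matrices of feature maps (match the fabric's textures). Here is a minimal NumPy sketch of those losses; the feature arrays are random stand-ins for CNN activations, not anything from Deep Dream Generator's actual implementation:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height*width) feature map:
    channel-to-channel correlations that capture 'style'."""
    c, n = features.shape
    return features @ features.T / n

def content_loss(gen, content):
    """How far the generated features drift from the content image."""
    return float(np.mean((gen - content) ** 2))

def style_loss(gen, style):
    """How far the generated Gram matrix drifts from the style image's."""
    return float(np.mean((gram_matrix(gen) - gram_matrix(style)) ** 2))

def total_loss(gen, content, style, alpha=1.0, beta=1e3):
    """NST updates the generated image to minimize this weighted sum."""
    return alpha * content_loss(gen, content) + beta * style_loss(gen, style)

rng = np.random.default_rng(0)
content = rng.standard_normal((8, 64))  # stand-in features of the photo
style = rng.standard_normal((8, 64))    # stand-in features of the wax print
loss = total_loss(content, content, style)  # start from the content image
```

In a real NST run, a pretrained network such as VGG extracts the feature maps, and the generated image is nudged by gradient descent until the weighted sum is small: recognizably Holiday, textured like the fabric.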

*Note: In accordance with the wishes of the photographer, the Holiday portrait and others in his collection entered the public domain in 2010.

Nettrice Gaskins, “Lady Day,” 2017. Created using Deep Dream Generator (Deep Style).

I used DDG and Deep Style for several years, and my output improved with daily use. More recently, I combined Deep Style with text-2-image (or vice versa). I've been riffing on my 2017 Deep Style Billie Holiday portrait using MidJourney (text-2-image), while exploring my desire for peace as a counter to things (disasters, elections) beyond my control.

Repose is a state in which you are resting and feeling calm. She had a still, almost blank face in repose. Its atmosphere is one of repose rather than excitement.

Riffing on Gottlieb’s photo of Billie Holiday using MidJourney

My favorite images from these MidJourney sessions are posted on social media, as per usual. However, at some point this week, I decided to return to Deep Style in Deep Dream Generator to apply different image style references to the MidJourney output. Next, I composited two (or more) images in Adobe Photoshop and used Generative Fill (in Photoshop) to create the finished work. Here is the result of using Deep Style:

Using Deep Dream Generator (Deep Style/NST)

For the finished top image, a composite of output from MidJourney and Deep Dream Generator, I used Generative Fill in Photoshop. Generative Fill is essentially inpainting: filling in missing or damaged regions of an image, or removing an unwanted object, to construct a complete image. In this case, I wanted to change the aspect ratio from rectangular to square-ish and let the machine riff on the original image. The entire experience feels more like a collaboration with AI than merely typing in text and clicking a button.
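As a rough illustration of what inpainting means (a toy, not Photoshop's Generative Fill, which synthesizes new content with a generative model), here is a minimal NumPy sketch that repairs a masked hole by repeatedly averaging each missing pixel with its neighbors:

```python
import numpy as np

def inpaint(img, mask, iters=200):
    """Naive diffusion inpainting: repeatedly replace each masked pixel
    with the average of its four neighbors until the values settle.
    img: 2D float array; mask: True where pixels are missing."""
    out = img.copy()
    for _ in range(iters):
        # Average of up/down/left/right neighbors (edges replicated)
        padded = np.pad(out, 1, mode="edge")
        neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = neighbors[mask]  # only the missing pixels change
    return out

# A flat gray image with a square 'hole' punched in it
img = np.full((16, 16), 0.5)
mask = np.zeros_like(img, dtype=bool)
mask[6:10, 6:10] = True
img[mask] = 0.0           # damage the region
restored = inpaint(img, mask)
```

Generative Fill goes far beyond this kind of smoothing: it invents plausible new detail in the filled region, which is what makes extending the canvas to a new aspect ratio feel like a collaboration.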

Nettrice Gaskins, “Fantastical Friday,” 2025. Created using MidJourney, Deep Dream Generator & Photoshop.

So how does this process counter algorithmic bias?

This week I spoke at the “Teaching the Arts in the Age of A.I.: A Symposium” at the University of Maryland. We kicked off the event with a panel addressing pedagogical (education) issues and ended the event with a panel about algorithmic bias. Algorithmic bias occurs when systematic errors in machine learning algorithms produce unfair or discriminatory outcomes. I began my brief presentation with the Shirley Card (see below), a tool that was used by photo labs to calibrate skin tones, shadows and light during the printing process.

The Shirley cards were used all over the world, wherever the Kodak printers were used. "It didn't make any difference which model came in later to do it," he says. "It was still called 'The Shirley.'" — Mandalit del Barco (NPR)

My intro slide for the bias panel presentation

Initially, these (Shirley Card) references were created using the faces of white women and, as a result, people with darker skin were often poorly rendered and underexposed. This went on for years, until furniture and candy companies complained that their wood and chocolate products looked too dark in photos. That's when things changed, but we also heard this:

“At the time, in the ’50s, the people who were buying cameras were mostly Caucasian people,” [Lorna Roth] says. “And so I guess they didn’t see the need for the market to expand to a broader range of skin tones.” — Mandalit del Barco

Considering the lack of representation in STEM/STEAM, this latter quote explains a lot about how algorithmic bias gets perpetuated:

The second slide for my bias panel presentation
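The Shirley Card mechanism is easy to demonstrate in a few lines. With made-up reflectance numbers (purely illustrative, not measured values), calibrating a printer's exposure gain against a single light-skinned reference renders that reference correctly while pushing every darker tone far below the target:

```python
import numpy as np

# Hypothetical reflectance values (0-1) for subjects of different skin tones
subjects = {"light": 0.60, "medium": 0.35, "dark": 0.15}

# Calibrate the printer's gain so the single reference card lands on the
# target mid-gray -- the "Shirley card" approach
target = 0.45
reference = subjects["light"]   # only one reference tone is ever used
gain = target / reference

rendered = {name: gain * value for name, value in subjects.items()}
# The reference tone prints correctly; everything darker ends up far
# below target, i.e. underexposed relative to a per-tone calibration
```

The error isn't in any single step; it's in choosing one reference for everyone. The same pattern recurs when a training dataset over-represents one group.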

During my talk, I highlighted John Akomfrah's "My Body is the Monument," which references the Shirley Card but replaces the standard colors with more diverse skin tones. Akomfrah's work is a riff on the original Shirley Card. Using generative AI, creators can riff on biased output through neural (AI) tools, image compositing, and inpainting. We can also use non-tech skills such as visual language and visual storytelling to counter algorithmic bias. A lot of bad things end up baked into the models that AI generators are trained on, and those things cannot be fixed without starting from scratch. However, as I've said before, there are things we can do through prompt engineering and by using different GenAI tools. Of course, this process is laborious, but so is… making ART.

Written by Nettrice Gaskins

Nettrice is a digital artist, academic, cultural critic and advocate of STEAM education.
