BBL Drizzy, The D.O.C. & TextFX: Generative AI in Hip Hop

Nettrice Gaskins

Metro Boomin + Midjourney + Deep Dream Generator

“BBL Drizzy” is a ‘diss track beat’ by Black music producer Metro Boomin. The track was released on May 5, 2024, taking aim at Drake amid the Drake–Kendrick Lamar feud, which produced multiple diss tracks on both sides. “BBL Drizzy” made hip-hop history by accidentally sampling an AI-generated song written by @kingwillonius, whom I previously wrote about in American Fiction, AI & Willonious’ Film Reparations. Rap Research Lab posted about the sampled Willonius AI-generated track on Instagram today.

King Willonius used Udio, a generative AI music tool from a company founded last year by former Google DeepMind researchers to make it “easy for anyone to create emotionally resonant music in an instant,” according to the company. The Recording Industry Association of America, or RIAA — the trade group that represents the major music labels — has filed a copyright infringement case against Uncharted Labs, the developer behind Udio, for training its AI models on the labels’ sound recordings without a license.

While technological advances in digital media and the internet would seem to bring a decentralized (even democratized) structure that diverts the costly music distribution system allowing for more artists and labels to compete, the RIAA has acted to prevent these technologies from developing their greatest potential. — David Arditi

Record labels and artists have conflicting interests when it comes to the production of music. When artists sign record contracts, they sign away the rights to their music (Hull 2004), which effectively strips them of free speech over their own work. In other words, the RIAA presents itself as protecting artists’ rights, but in reality it seeks to maintain control (power) over its current and potential assets. One thing the RIAA can’t do is give a legendary rapper his voice back.

News directly from The D.O.C.: new music using AI

Last year, hip hop pioneer Fab 5 Freddy connected legendary 1980s rapper The D.O.C. with the CEO of the AI company Suno, which is “teaching [AI] what D.O.C. used to sound like.” In 1989, The D.O.C. was in a car accident that nearly destroyed his face and vocal cords. The accident prevented him from continuing his rap career… until now, when Suno AI can match his new vocals to his voice from earlier recordings. Note: the RIAA is also suing Suno. Udio, for its part, recently released a statement that includes the following:

The goal of model training is to develop an understanding of musical ideas — the basic building blocks of musical expression that are owned by no one. Our system is explicitly designed to create music reflecting new musical ideas. We are completely uninterested in reproducing content in our training set, and in fact, have implemented and continue to refine state-of-the-art filters to ensure our model does not reproduce copyrighted works or artists’ voices.

Cognitive ARTifacts: my model for the coming GenAI era

While I was a PhD student at Georgia Tech, I wrote an essay titled “Cognitive ARTifacts: Examining sociocultural and cognitive dimensions in STS practices through culturally situated, multimodal interactions that construct meaning.” Here’s an excerpt:

Current interest in the design of computer interfaces has forced consideration of the role of real tasks and environments, and therefore of groups of cooperating individuals, of artifacts, and of culture (Norman 1991). Norman (2009) sets the stage by putting technological innovations into two groups: conceptual breakthroughs that have a huge impact on society and continual improvements that merely improve on an existing technology.

What we are seeing today are conceptual and practical breakthroughs in technological innovation. We are also seeing how these breakthroughs threaten the status quo. The ‘anti-AI brigade’ would rather eliminate something of value in order to get rid of something they don’t like, without considering the far-reaching implications for creative expression and technological innovation. In doing so, they play right into the hands of the RIAA and power-hungry corporations seeking to control artistic output and artists’ livelihoods.

Rapper Lupe Fiasco and Google’s TextFX. Courtesy of Google Research Labs

On the lyricist side of things, TextFX allows users to experiment with the PaLM API and build applications that leverage Google’s state-of-the-art large language models. These tools are designed to expand the writing process by generating creative language and enhancing language skills. In the following video, Lupe demonstrates how it works:

According to Lupe, TextFX “won’t write raps for you. Instead, these tools are designed to empower your writing, provide creative possibilities and help you see text in new ways. Like with any tool, you still need to bring your own creativity and skillset to them.” The same can be said for most generative AI tools, and I make this point in my interview with the Electronic Frontier Foundation for their “How to Fix the Internet” podcast.
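To make Lupe’s point a bit more concrete: under the hood, a TextFX-style tool is essentially sending a carefully worded prompt to a large language model and handing the raw output back to the writer. Here is a minimal sketch of that idea in Python, assuming the google.generativeai client and a PaLM API key; the simile prompt is my own illustration, not Google’s actual implementation.

```python
# Minimal sketch of a TextFX-style helper: ask a PaLM-family model for
# creative raw material (here, similes) and hand the lines back to the writer.
# Assumes the google.generativeai client and a valid PaLM API key.
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")  # replace with your own key

prompt = (
    "Give me five vivid similes a lyricist could use to describe "
    "stage fright. One per line, no explanations."
)

completion = palm.generate_text(
    model="models/text-bison-001",   # PaLM 2 text model exposed by the API
    prompt=prompt,
    temperature=0.9,                 # higher temperature -> more surprising lines
    max_output_tokens=200,
)

print(completion.result)  # raw material only; the writer still chooses what to keep
```

None of those lines are a finished bar. As Lupe says, the model supplies possibilities; the lyricist still does the writing.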

All of this to say: Don’t throw the babies out with the bathwater.
