Digital vs. GenAI: From Deluxe Paint to Generative AI
Deluxe Paint was a digital paint program, or ‘bitmap graphics editor’, created by Dan Silva for Electronic Arts and released for the Amiga 1000 in the fall of 1985. It was the first software I used to create computer-based, or digital, art. Deluxe Paint would be around for ten more years, even after I had moved on to other programs (e.g., Adobe Photoshop) and computers as a college student at Pratt Institute in Brooklyn, NY, and the School of the Art Institute of Chicago.
At the time (and often today), those who worked in traditional media such as graphite, paint, and clay were more accepted by the art establishment, while those of us drawing with pixels were pushed into graphic design. Nothing I was doing in the early 1990s would have been considered art, except perhaps 2D and 3D animation, and I didn’t want to be boxed in. So I bided my time until vector graphics took over and images could be made at higher resolutions. Then the World Wide Web proliferated and everything became DATA. Today I argue that what I created using computers back then was more Dadaist than digital: it was a digital form of photomontage.
As a student I knew more about Romare Bearden than Hannah Höch, but I was thinking of my role as a Black woman (not just a woman or a Black person) in the computer graphics/digital art space. When I looked, I didn’t see anyone like me in that space. Höch, who often felt alienated, used the controversial style of Dadaism to undermine the status quo; today I’m using generative artificial intelligence, a descendant of that 20th-century movement. According to some, AI art is undermining a status quo in the art and entertainment worlds. Just as some determined that Dada art wasn’t real art, there are people today saying the same thing about AI art.
That determination to cast out AI art as ‘not art’ is undermined by an arts establishment that is gradually embracing it (see Refik Anadol).
Such an exhibition forces us to consider the collaboration between human and machine, but it also raises interesting questions about different ways of seeing and knowing. What does it mean to see data like this? How does turning numbers or electrical impulses into these visualizations change their meaning?
The main issue I have with the more widely accepted AI-generated artworks is that they’re too abstract; my interest is in centering Black and brown people in the images… literally. For example, I chose the following image for an upcoming group show. There are never just a few references in these images; there are sometimes hundreds or thousands.
I came up with a title for my recent GenAI works: “Afro-Generative Tableaux Variations.” I think the way the subjects are framed and staged within my AI-generated images changes the purpose and intention behind the work. I think about how (and why) Hannah Höch inserted herself in her photomontages. These works answer the digital vs. GenAI debate because GenAI images are built on the proliferation of digital imagery (as data) across the Web. Each result is the product of a latent space of 500+ dimensions, a space where machine learning does some of the work, as Photostat machines once did for the Dadaists (and for Bearden).