Art Intersections: From Visual Arts & Computer Graphics to AI
When my high school visual art teacher, Mrs. Sidebottom, first suggested I take her Computer Graphics (CG) class, I declined. I was a Visual Arts major, and CG sounded too much like the computer programming my mother did. I was eventually persuaded to take the class, and in just one semester I had created enough computer-generated artwork to assemble a second portfolio alongside my Visual Arts one. The experience made me aware of the separation between art created on computers and art made with analog materials. I didn’t want to choose one side over the other, so in college I majored in CG and “minored” in traditional art (and art education).
The rapid change in information processing technologies from the 1990s to the 2000s led to many developments in computer-generated imagery (CGI). In the 1990s, artificial intelligence systems performed simple tasks like choosing cards and playing games. The nascent-stage CG images I created in the late 1980s and early 1990s look similar to the pixelated images produced by Generative Adversarial Networks (GANs) in 2014. Developments in CGI and AI began to merge, and soon AI systems were able to generate images that were hard to distinguish from photographs. But I didn’t want photographs. I wanted to use AI to make visual art.
However, there is still a separation between what people refer to as traditional or fine arts and CGI/AI. I’m sure many would argue that these areas never need to come together, especially artists who prefer traditional methods and materials. I was once like them, but Mrs. Sidebottom’s class set me on a path to embrace the uncertainty that comes with merging visual art and emerging technologies. For five years I used AI generators such as Midjourney and Deep Dream Generator to re-imagine, recast, remix, and restyle images such as my early sketch of the Brooklyn Bridge.
I can upload my analog drawings and paintings into Midjourney, which has a describe feature that transforms an image into words, or prompts. From the four prompts it offers, I choose one, which generates four thumbnails. Next, I select one thumbnail image to upscale, and a new set of options appears: I can Make variations on that image (see above), Upscale to max before I download it, or choose a Light upscale redo if it’s not quite what I want, among other options. This process is not meant to copy or repeat what I did to produce the charcoal sketch of the Brooklyn Bridge. It’s a different, more complex method with many steps, and it’s not as easy as clicking a button. Take, for example, the AI translation of my sketch of shoes.
Midjourney not only gave me a still-life charcoal sketch of shoes but also captured the values I used, i.e., the wave of darks and lights. My professor would set up a still life with dramatic lighting. He showed us how to shade the dark areas of the blank paper with black charcoal before working in the other values. It was less about the shoes and more about understanding form and value. The early study was a guide for the AI. I like both images.
I like using AI generators such as Midjourney to interpret, or visually explain, favorite texts, concepts, or processes. I often create a digital collage in Photoshop first, before using the AI tool. For “Arneatha,” I created a digital collage and uploaded the image to Midjourney. I used the collage, the describe feature, and permutation prompts to create a final image (see above), which comes from my memory of “The Soul Brothers and Sister Lou” by Kristin Hunter. In the book, the main character, Louretta (Lou), competes for attention with her attractive older sister, Arneatha, who is portrayed as lazy and selfish. The style, personality/representation, and visual elements of the character were determined by me, the artist.