Recently, computer graphics expert Theodore Kim highlighted how rendering algorithms treat white and Black skin tones differently, arguing that the CGI industry is built around techniques for depicting white skin and fails to properly render darker tones. Rendering white skin leans on subsurface scattering, while rendering darker skin depends on specular reflection. I wrote about this research in more depth in a previous post.
Prior to Kim’s SIGGRAPH talk, I had been creating and using custom image styles with source photos in Deep Dream Generator. The term specular reflection, or specularity, made sense to me. The effect in my images also seemed to resonate with other people: I saw an uptick in social media followers whenever I posted certain images, and I received more requests for AI art commissions, including one for the 2023 novel, “Nigeria Jones.”
I’ve referred to my approach to creating AI art as a kind of exquisite corpse, but instead of words and paper, I’m using image-2-image (Deep Dream image style transfer) and now text-2-image AI techniques. The latter is done mainly with Midjourney, plus DALL-E 2 for outpainting. The art for “Nigeria Jones” was based on the character description from the author:
Nigeria is a Black teen girl with long locs. She has been homeschooled and living apart from regular society — so her clothing is not trendy or of the moment.
The publisher also requested that the character be seated because “the young adult market” is flooded with portraits from the chest-up.
My challenge was telling a story about the main character in a single image based on input from the author and publisher. We couldn’t find one stock image that met the criteria, so I combined elements of three stock photos to create a base or source image for use with Deep Dream Generator. I also used my custom image style to apply the specular reflection effect. I composited the results using Photoshop and added Nigerian (Ankara) textile patterns to the tank top.
The book cover art came before Midjourney, a text-to-image program that uses natural-language prompts instead of source images to generate art. However, I could apply what I learned from the earlier process to create new images in Midjourney. In the example above, I started with a text prompt, and the result looked very little like Harriet Tubman. Then I remastered the image (middle); the remaster feature makes new variations of the original image. It was closer but still not close enough, so I used Photoshop to manipulate it.
I wish I had had access to a program like Midjourney when I worked on “Nigeria Jones,” but I’m happy with the end result, and so is the author. These results seldom come from simply pressing a “Generate” or “Return” button. They require user input and, in my case, a lot of layering and manipulation. The machine runs the algorithms, but it doesn’t tell stories.