The Art of the Concept: Costume Design in the Age of GenAI
The key to becoming a successful concept artist is to digest as much art as you can, learn from your peers… — Nadia Stefyn
Generative AI — the use of deep learning to generate content — has proven to be a fertile space for concept artists and designers. The term concept art was in use as early as the 1930s at Walt Disney Animation Studios. Concept artists generate visual designs for things that do not yet exist. GenAI concept art now appears in film/TV, animation, and video game production and, more recently, in fashion and costume design. Being a concept artist takes commitment, vision, and a clear understanding of the role. Below are a few examples of the use of AI-generated concept art in real-life creative work.
Recently, the Council of Fashion Designers of America honored Erykah Badu with its Fashion Icon award. The award event is known as the “Oscars of American fashion.” At the event, Thom Browne, the chairman of the CFDA, noted that “fashion, like democracy, is about choices and individual decisions that have the power to create culture and communities and define the future we want, we need and we deserve.” One of the things I noticed was Badu’s use of generative AI concept art to create her look for the CFDA event. She reached out to an Italian AI concept artist via Parallel.
The best thing about Badu’s IG reel is seeing the process unfold: from AI-generated concept art and design to the making of wearable clothing and accessories. Another example of generative AI in the fashion and costume design space is the work of Iris Van Herpen, a fashion designer who uses GenAI to imagine new worlds, such as architecture existing in harmony with marine ecosystems. Van Herpen’s team built a training dataset — the collection of examples used to teach a machine learning model — from her fashion archive (her design DNA), and the concepts emerged from that process.
The creative process we had together was exceptionally inspiring, we could dream up all these references, to cathedrals we love, the deep sea life we have seen, and even my archive that we trained the AI with. So by teaching the AI my design DNA and the more historic architectures references, it got better ‘dreams’. — Iris Van Herpen
Another related project, one I had a more direct connection to, is the upcoming Ryan Coogler film Sinners. Earlier this year I was approached by costume designer Ruth Carter (a two-time Oscar winner for the Black Panther films) to use GenAI to create several concepts for costumes. While it’s still too early to divulge most of the work, I can reflect on what it was like working with a costume designer who was already exploring GenAI and using existing tools to generate concepts. Ruth and her team had done a lot of research prior to the use of GenAI, such as collecting 1930s photography of the Deep South and fashion illustrations from archived mainstream magazines.
I used MidJourney’s /describe command to upload images and generate possible prompts based on those images. Ruth went to my IG account to identify images that resonated with her. I went into my MJ archive to collect the prompts that were used to generate the images. I also uploaded several of my pencil sketches and existing photos to create looks for Ruth to consider. This process was iterative and included several Zoom meetings, mobile phone conversations, and texts. It was exciting to see some of those AI-generated concepts or ideas come to life on screen. Here’s a brief synopsis of the process:
- Decide on a clear concept. For Sinners, the key concepts came from the film script, which I was required to read before doing any GenAI work. The designer (Ruth) also had ideas that I had to keep in mind while working.
- Gather reference materials. As mentioned previously, Ruth and her team had done a lot of research prior to my involvement. I added my own research as well, in the form of thumbnails.**
- Create thumbnails. This is where tools like MidJourney surpass what humans can do in one sitting. I was able to generate and share dozens of thumbnails (usually via text) with the designer, who gave me instant feedback that often sent me back to the ‘drawing board.’
- Add finishing touches. Once Ruth decided that she liked a thumbnail, I would upscale it or make additional changes using MidJourney’s Vary Region editor, which allows me to select and regenerate specific parts of an upscaled image.
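For readers unfamiliar with the command, the /describe step in this workflow looks roughly like the sketch below. The attached image and the returned prompt are hypothetical examples, not material from the film:

```
/describe   (attach a reference image, e.g. a scanned 1930s photograph)

MidJourney replies with four candidate prompts, numbered 1–4, such as:
1. sepia photograph of a man in worn denim overalls and a felt hat,
   1930s rural Mississippi, documentary style
   ...

Selecting a numbered option generates new images from that prompt,
which can then be upscaled or refined with the Vary Region editor.
```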
**Note: Prompt engineering skills are a must when generating concepts for real projects. Prompt engineering is the process of designing inputs, or prompts, to guide AI models toward desired outputs. Prompt engineers use creativity and trial and error to craft prompts that provide context, instructions, and examples to help the AI understand intent and respond in a meaningful way. In previous articles I have addressed this aspect of creative GenAI work, especially the “anatomy” of a prompt.
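As a rough illustration of that “anatomy,” a prompt for this kind of costume work might be structured as follows. This is a hypothetical sketch of the general pattern, not a prompt actually used on Sinners:

```
[subject & garment] + [era & setting] + [medium/style] + [lighting] + [parameters]

full-length costume concept, woman in a bias-cut floral day dress,
1930s Deep South, vintage fashion illustration style,
soft afternoon light --ar 2:3 --style raw
```

Each segment narrows the model’s output: the subject and garment anchor the design, the era and setting supply historical context, and parameters such as --ar (aspect ratio) control the frame.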