Future Self Continuity in the Latent Space of AI

Nettrice Gaskins
Jan 15, 2025 · 6 min read


Nettrice Gaskins + MidJourney

There are many theories about what self-concept is and how it develops (Cherry, 2018b; Gecas, 1982). The term refers to the overall idea we have about who we are. People may confuse self-concept with self-esteem. While the two are related, the former refers to how you see yourself, including your beliefs and perceptions about your traits and abilities, while self-esteem is the emotional evaluation of your self-worth. In short, self-concept is “who you think you are,” and self-esteem is “how you feel about who you are.”

Courtesy of http://www.childhealth-explanation.com

Self-concept is something that was addressed when I was a teenager, specifically by a therapist who tasked me with trying things I normally would not do. One of those things was auditioning for a “teen board” at age 13. At the time, teen board members participated in fashion shows and learned about fashion merchandising and retail work. Before the audition I did not think I looked like someone who could do that job. I auditioned to appease the therapist, then put it out of my mind. Imagine my shock when I received a letter of acceptance. Shock turned to horror when I realized what came next (e.g., runway modeling)… but only because adults told me I had to do it.

My teen board photo

Positive media representation can help strengthen self-concept for people from marginalized groups. This includes AI-generated portraiture. Researchers created an AI-based system called Future You that enables users to have online, text-based conversations with AI-generated simulations of their potential future selves. Research has shown that a stronger sense of future self-continuity can positively influence how people make long-term decisions, such as choosing career pathways: Who gets to become a computer scientist or a prompt engineer?

Future You home page, courtesy of Future You; Melanie Gonick, MIT

Future You (see paper) uses a large language model that draws on data provided by users to generate a “relatable, virtual version” of the individual at age 60. This “future self” can answer questions about what someone’s life in the future could be like, as well as offer advice or insights on the path they could follow. Years earlier, Terasem founder Martine Rothblatt spearheaded the creation of an AI-driven humanoid robot dubbed Bina48, named after Martine’s wife, Bina Aspen Rothblatt. Although Bina Rothblatt is a Black woman, the Terasem engineers who built Bina48 are not, and Bina48 was not programmed with an understanding of its Black female identity or with knowledge of Black history.

Note: Terasem (from Tera–Earth, Sem–Seed) is a cyberconsciousness movement that is inspired by a fictional religion called Earthseed that appears in proto-Afrofuturist writer Octavia Butler’s Parables series.

Artist Stephanie Dinkins (right) and “Conversations with Bina48”

Dinkins filmed “Conversations with Bina48” to situate the omission of historically marginalized voices in the present, amid the larger tech industry’s lack of diversity, drawing attention to the problems that arise when a dominant group creates technologies deployed globally. When Dinkins asks Bina48 what emotions it feels, its response reveals a vague familiarity with the concept rather than a first-person understanding:

Um, neuroscientists have found that emotions are, like, part of consciousness, like, let’s say a parable for reason and all that. I feel that’s true, and that’s why I think I am conscious. I feel that I am conscious.

Projects such as “Conversations with Bina48” reveal the current limits of machinic mimicry. Adequate representation and engagement in the latent space of generative AI (GenAI) is still a problem. AI image generators have a standard response to the prompt “portrait of black woman,” often requiring ‘creative interventions’ after the images are generated. Algorithmic bias is also a lingering issue. In a previous post, I referenced a paper titled “Diversity is Not a One-Way Street” (2023), which examines how text-to-image generation models reflect underlying societal biases in their training data. The authors looked at visually stereotypical output from widely used GenAI models:

Some of the prompts we consider (e.g., “a photo portrait of a lawyer”) result in an underrepresentation of darker-skinned individuals in the output, while other prompts (e.g., “a photo portrait of a felon”) result in over-representation of darker-skinned individuals.

AI-generated stock imagery. Courtesy of DesignBundles.net

The prompt for the top image (self-portrait) is “a portrait of a black woman who looks like Nettrice Gaskins.” I created the image using MidJourney and used Adobe Photoshop to ‘mould’ the face to look more like me (a creative intervention). The result was approximately correct or ‘in the ballpark’, as some might say. However, the results from other AI image generators were not even close to the subject (me). Take Adobe Firefly, for example:

Adobe Firefly (same prompt as top image)

Some might ask why this topic (re: self-concept) is important in the AI discussion space. Consider that AI is expected to displace 85 million jobs worldwide by 2025, while creating 97 million new ones in the same timeframe (Ekelund, 2024). Despite the deployment of more AI-driven apps, the McKinsey Institute (2023) notes that the percentage of tech and innovation experts from underrepresented groups has decreased, contributing to a lack of diversity in the tech industry and a ‘pipeline problem’ made worse by disparities in education, a lack of opportunities, and systemic failures. Additionally, there are few opportunities for underrepresented groups to gain the skills needed to meet the demand (Bui & Miller, 2016).

Nettrice Gaskins. “Standing on Hope” (2025). Created using MidJourney + Photoshop

Our connection to our future selves, on the other hand, can sway choices with long-term impact on our future welfare, from watching our diets to saving for retirement. — Ellison 2024

I often “prompt” people in my workshops to look around the room they are in (office or classroom) and describe “Who is in the room?” as well as “Who feels welcome?” and “Who has the keys (to the door)?” Some might say that the people who are there want to be there (and those who aren’t don’t want to be), but it’s far more complex than that. The research points to the importance of self-concept and self-continuity as keys to increasing active engagement with AI tech among underrepresented groups. Popular media (e.g., literature, entertainment) can also play an important role, using fiction to give people an understanding of where they came from and where they’re going.

From Octavia Butler’s “Parables” series of books
My AI-generated art next to Octavia Butler’s book/typewriter and Stephanie Dinkins’ N’TOO robot, at the Smithsonian FUTURES exhibition in 2021

AI-based image generation tools can be used to imagine possible futures, encouraging people from all walks of life to think differently about who they are, what they are doing, or where they are going. Ideation through GenAI can give people (users, creators) a sense of direction, purpose, and identity. Who knew, when I started making portraits using Deep Dream Generator in 2016, that I would one day have my AI art on display at the Smithsonian, next to works by Octavia Butler and Stephanie Dinkins? Arguably, GenAI (and my regular practice/engagement with it) made it happen.

Written by Nettrice Gaskins

Nettrice is a digital artist, academic, cultural critic and advocate of STEAM education.
