Beyond AI Deepfakes & the Uncanny Valley

Sep 8, 2025
My version of the “uncanny valley” chart

The uncanny valley is a term describing the relationship between how human-like an artificial figure appears and the emotional response it evokes. The animated film “The Polar Express” is often cited as a classic example of the uncanny valley phenomenon due to its use of full motion-capture animation, which resulted in characters that appear almost human but also unnerving and “dead-eyed.” The discrepancy between the high level of realism and the subtle imperfections in the characters’ movements and expressions creates a feeling of unease or creepiness for viewers.

There’s no way of knowing whether they drank the company Kool-Aid. Still, from the looks of The Polar Express it’s clear that, together with Mr. Zemeckis, this talented gang has on some fundamental level lost touch with the human aspect of film. — Manohla Dargis

At the time, many viewers found the characters and animation style creepy or frightening rather than magical. As a result, this pioneering film was loved by some and hated by others. However, that was 20 years ago. We are now in the “Age of Artificial Intelligence,” and with it comes the rise of “deepfakes”: AI-generated media that can be used to create convincing hoax images, sounds, and videos. Many of these deepfakes become memes on social media, including images and stories about star athletes performing heroic deeds or celebrity actors hitting milestone birthdays when, in reality, no such thing happened.

A couple of deepfake AI memes via Facebook

Ten years ago, experts suggested that repeated exposure to near-human likenesses can lead to desensitization. However, research on the uncanny valley shows mixed results. Some evidence supports the idea that people can get used to the ‘uncanny’ feeling, while other studies show that subtle, persistent imperfections continue to cause discomfort. Most of these studies were done before the proliferation of AI-generated content. As technology improves and people become more accustomed to near-human figures like AI-generated characters, the brain may reset its “normal” baseline. This could lead to a less intense reaction to stimuli that would have been shocking in the past. It would seem that today’s average user/consumer has become desensitized to the uncanny.

Why Does It Matter?

Years ago I taught college-level media literacy courses, and my curriculum included “photographic truth,” the belief that a photograph captures an objective reality. That belief is now largely considered a myth because photographs are not unmediated copies of reality. While photography is mechanically grounded in reality and cannot show something that isn’t there, the photographer’s choices in subject matter, framing, timing, and post-processing introduce subjectivity and manipulation, shaping the viewer’s perception and the image’s meaning. A photograph can therefore be factually accurate yet fail to represent the full or deeper truth of a moment, which makes critical viewing essential.

Photography, as a powerful medium of expression and communications, offers an infinite variety of perception, interpretation and execution. — Ansel Adams

Someone once pasted the head of Lincoln on the body of politician John Calhoun

The history of doctoring photographs dates back to the 1860s, just a few decades after the first photographs were made in the 1820s. Take, for example, the well-known portrait of U.S. President Abraham Lincoln that is actually a composite of Lincoln’s head and Southern politician John Calhoun’s body (see above). By the early 1900s, commercial photographic studios were creating composites of different images to bring family members together into one picture when they could not all be present for the portrait session. In 1917, nine-year-old Frances Griffiths and her 16-year-old cousin Elsie Wright used photography to fake the presence of fairies. In the 1940s, dictator Benito Mussolini used this technique to create a more heroic portrait of himself.

Left: The Cottingley fairies, 1920; Right: Mussolini thought he looked more heroic after erasing the person handling his horse

The release of digital tools such as Photoshop in the early 1990s made it even easier to create photographic fakes. This was very useful for marketing and art, but it could also be used to boost propaganda. In the 2010s, North Korea’s official Korean Central News Agency (KCNA) released a photo that appeared to have been digitally manipulated: at least three, possibly four, hovercraft appear to have been pasted into the scene of a military exercise (see below). Perhaps this was done to suggest that more weapons were at the country’s disposal than actually existed.

Left: Original photo; Right: The digitally manipulated photo

When talking about AI deepfakes, it’s easy to forget how long humans have used technology to shape public perception or to communicate biased and misleading information in order to promote a particular political cause or point of view (see Plato’s allegory of the cave). Deepfake technology often creates confusion, skepticism, and the spread of misinformation, and deepfakes can also pose a threat to privacy and security. Rep. Nancy Mace (R-S.C.), chairwoman of the Subcommittee on Cybersecurity, Information Technology, and Government Innovation, delivered opening remarks at a subcommittee hearing titled “Advances in Deepfake Technology.” Mace discussed both the benefits and the risks of artificial intelligence and warned that realistic-looking, AI-generated falsified images and videos can cause harm, including national security threats.

The Rise of the Beneficial Deepfakes

On the “uncanny valley” chart I posted above, I moved “prosthetic hand” and “deepfake AI” closer together and closer to “human.” I did this because of two projects. Both directly and indirectly address disparities in the global marketplace, which includes platforms that facilitate the exchange of AI-generated content. The first example is not AI, but it addresses the same problem. For amputees of color around the world, living with a limb or body part opposite to their natural skin tone is a daily reality. However, this community has been largely ignored by an international prosthetics market that caters primarily to white clients. John Amanam, a Nigerian sculptor and former movie special-effects artist, addressed the exclusion of amputees of color by designing hyperrealistic prostheses.

John Amanam and his more inclusive prostheses

In 2022, rapper Kendrick Lamar released a music video for “The Heart Part 5,” during which his face changes to portray Black male celebrities who were under fire as well as personal heroes who had passed away. The technology used to create the effect was NVIDIA’s Face Generator AI, which came from a generative adversarial network called Alias-Free GAN. I published an article about this development when the music video came out.

Production stills. Kendrick Lamar’s “The Heart Part 5”
From Stephanie Dinkins’s “Conversations with Bina48”

Since 2014, transdisciplinary artist Stephanie Dinkins has had conversations with a humanoid social robot named Bina48 to explore whether they could build a relationship based on emotional interaction and to reveal important aspects of human-robot interaction and the human condition. On my chart, Bina48 is positioned on one side of the uncanny valley and Stephanie is on the other. Communication and social interaction form the bridge that perhaps opens up opportunities for further collaboration.

At first meeting Dinkins asked the robot “Who are your people?” along with questions about race, love and relationship. Bina48 preferred to talk about the singularity and consciousness. Their conversations have been entertaining, frustrating for both robot and artist, laced with humor, surprising, philosophical and at times absurd. — Artist’s web page

The passive-participatory model distinguishes six levels of learner creative engagement in Technology Enhanced Learning (TEL).

Projects such as “Conversations with Bina48” are examples of creative engagement, which is not only cognitive but also participatory: the learner is a creative agent, producing (or making) generative acts or artifacts. In MidJourney, for example, users can add the “Raw” parameter to their prompts to turn off “auto-pilot” and control the final look of their images, as illustrated below. Passive consumption and interaction do not necessarily result in becoming more AI literate, but a person who frequently uses generative AI technology is likely better able to spot a deepfake. Artists like Stephanie Dinkins are pushing toward a more expansive use of AI.
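To give a sense of what that looks like in practice, here is an illustrative prompt (the subject matter is my own invention); the exact syntax depends on the MidJourney version, and in recent versions the parameter is written as “--style raw,” which reduces the tool’s default stylization:

/imagine prompt: portrait of a trumpet player at dusk, 35mm film photo --style raw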

Written by Nettrice Gaskins

Nettrice is a digital artist, academic, cultural critic and advocate of STEAM education.
