On the Uncanny Valley in Video Games

by Jennifer Rezny
on 14 November 2015
Assassin's Creed: Syndicate, Ubisoft, 2015

Are video games in their awkward phase: realistic, but not realistic enough?


The new Assassin’s Creed game dropped a few weeks ago, the latest in an ever-growing chain of sci-fi/history games about ideological warfare and human nature. Sixty hours in, I’ve finally unlocked a scene where a little romance is blossoming, and the characters lean in to kiss. The camera is angled in such a way that I only see the back of one character’s head, and I know they are kissing because they’ve leant together silently, but also because I have seen this angle more and more often in video games in the past few years. 

Video games with romance scenes all work around the kiss differently. Sometimes, if the shot is framed from a side where the lips would be visible, one character places a hand on the other's cheek to obscure where their mouths meet. Other times, an instant before the lips finally brush, the camera swings around and hides the characters behind a piece of the setting. In some games this could be chalked up to ratings, but in games like Dragon Age: Inquisition or Mass Effect 3, the kiss comes alongside sex scenes, sometimes complete with full-frontal nudity.

This is the kind of angle you don't see in film, unless the director is deliberately trying to hide a body double, which video games don't have to worry about. The reason for it is that computer-generated graphics still struggle to be convincing. Assassin's Creed is a game that tends towards realism rather than any sort of stylized art, and as game studios push harder and harder towards photorealism, the pitfalls grow deeper and deeper. When I watch characters kiss in older video games, I see little digital figurines pressing close, the limitations of the medium inconsequential on character models that are only approximations of people to begin with. Now, when I see characters kiss in video games, I fixate on all the ways it doesn't work: photorealistic lips that phase through each other, movements that lack the minute twitch of muscles under the skin, slightly vacant-looking eyes that are meant to be soft and romantic. These almost-people are made to look like actual human beings, and at a distance or at a glance they might pass for them, but in a long, slow kiss, I'm disturbed by the way the light touches their skin and makes it look dead, even though the artists have taken great care to render every pore and blemish.

But when the camera is angled away instead, hiding their faces behind the back of one character’s head, I don’t feel quite so unnerved.

The sensation I speak of is called the uncanny valley, and the idea behind it is gaining more and more traction in the realm of computer-generated graphics and robotics. It is, in the simplest terms, the feeling of revulsion we get when seeing something near-identical to a human that is simply not human. The closer science and art come to replicating convincing humans, the more people become repelled by their efforts: at some tipping point, a clever rendition of a person reads as almost corpselike. These renditions are read as people, but they're people with something wrong with them.

A large part of this, beyond the obvious technological hurdles of rendering the translucency of skin or making two intangible models appear to touch, lies in microexpressions. Microexpressions are unconscious movements of the face that are almost impossible to replicate deliberately; they rely on incredibly minute contractions of facial muscles, on subtle changes in the skin itself, and on a variety of other factors that lie in the subconscious. Technology cannot (perhaps yet) replicate these, and as a result, otherwise perfectly convincing rendered faces can fall into the uncanny valley the moment they are put in motion: compared to a human face, the digital face is stiff to the point of being practically frozen.

Naturally, none of this makes for a terribly compelling visual in what could otherwise be a completely engrossing love scene. And herein lies the larger question: if the end goal of photorealism in computer-generated graphics is to perfectly replicate human beings, will the "digital actor" then spread from video games to film?

Already, digital actors are used in action sequences and post-production to do what real humans could not, or to replace practical effects that would otherwise be prohibitively expensive. But we have yet to see a film made entirely with photorealistic digital graphics that hasn't earned its reputation chiefly for delving into the uncanny valley: earlier attempts such as Square Pictures' Final Fantasy: The Spirits Within or Castle Rock Entertainment's The Polar Express have become notorious for making audiences uncomfortable. And considering the amount of computing power required to render photorealism, these endeavors often end up costing far more than a live-action movie would. Video games are scarcely as expensive as full-length films, animated or live-action, but as graphics improve, we might see more attempts at photorealistic animated films; games like The Last of Us suggest we are getting closer all the time.

All that remains, it seems, is the question of how long it will take to finally close that gap and give us scenes of human tenderness that don't unnerve us quite so much.