Why are we happy to watch movies with AI and robots, but feel disturbed by near-identical humanoid robots in real-life? Welcome to the Uncanny Valley.
Considering the robot theme of my last two posts, I was pleased last week to pick up a radio show from the BBC in their series ‘The Why Factor’ called “Fear of Robots”, which makes some of the same points about our assumption that robots will always be benign.
The presenter found himself somewhat disquieted by a robotic seal pup, and completely disturbed by an almost-human android.
He had, as the saying goes, entered the uncanny valley. Although we humans react (and sometimes over-react) very positively to human-like features – cartoon characters, dolls and the like – we respond very badly to simulations which are very, very nearly, but not quite, life-identical.
The Uncanny Valley
Despite the extraordinary advances in CGI, many filmgoers find greater satisfaction, and easier suspension of disbelief, in old-style animation than in movies which seek to recreate the real world.
The characters just don’t move right, or look right, or something. The difference is so slight and subtle, yet rings huge alarm bells in our heads.
One contributor to the radio show described very-near-human robots as giving us the same heebie-jeebies as walking corpses might. After all, they are cold, their skin tone is wrong, they don’t move naturally. Of course they freak us out.
Away from the uncanny valley, though, we love the broader approximations to human behaviour. As we turn away in discomfort from the close-to-real, we delight in the more grotesque caricature.
It seems we’re more comfortable with the messy, chaotic, imperfect real world than with a more sterile near-perfection. Perhaps that speaks to a deep aspect of human nature, something we software developers would do well to heed.
There are clear cases of this emotional reaction to human-like behaviour in the use of software, especially at work.
The response that many, if not all of us, had to that [expletive deleted] animated paper clip when it popped up and said, “I see you’re trying to write a letter, would you like some help with that?” was no different to the reaction we’d have to the co-worker who would keep dropping by to say, “You don’t want to do it like that. Do you?”.
Approximating the real world, including human behaviour, when developing the software that we need to interact with, is thus a complex matter.
Get it right and the user experience is one of delight and sustained engagement. But go too far and users are actively put off by the feeling that the software itself is somehow working against them.
At GEP we’ve been working on user experience technology that puts the human at the heart of the process. We are, of course, some way from software with a human personality. And although the possibilities are immense, they are not without risk.
Imagine sitting down at your desk each day to find that overnight everything has been rearranged to make it slightly more convenient for you. Perhaps so you don’t have to reach so far for the telephone, or your chair is aligned more ergonomically to the monitor.
Such things could dramatically improve our day… or screw it up entirely, leaving us feeling irritated or even violated. As creatures of habit we naturally reach for the place where the telephone has always been; that place isn’t necessarily ideal, it just is.
A Real-Life, Virtual Assistant
But there is another, more subtle, set of possibilities that we might permit to assist us without, to be frank, freaking us out.
You might imagine an assistant who begins by learning how you work: where the shortcuts are that you naturally take, and how others might be offered to speed things along. Then, when the time is right, your assistant might suggest that you have some choices – all in good time, no rush. The assistant notes how it could improve your life and recommends changes rather than enforcing them.
In time you might start noticing that there is less clutter around and you’re completing tasks faster, without ever having been trained, directed or instructed. User consent to small changes that help keep things tidy could be far more effective than a wholesale re-ordering of menus and icons.
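To make the idea concrete, here is a minimal sketch of that "recommend, don't enforce" pattern – all the names and thresholds are hypothetical, not a description of any real product. The assistant quietly observes repeated actions, proposes a shortcut only once a habit is clear, and changes nothing until the user explicitly agrees.

```python
from collections import Counter

class GentleAssistant:
    """Hypothetical sketch: observe habits, suggest once, change only on consent."""

    def __init__(self, habit_threshold=5):
        self.habit_threshold = habit_threshold  # how often before we dare speak up
        self.observed = Counter()               # action -> times seen
        self.shortcuts = {}                     # action -> accepted shortcut

    def observe(self, action):
        """Record an action; return a suggestion only when a habit has emerged."""
        self.observed[action] += 1
        if (self.observed[action] >= self.habit_threshold
                and action not in self.shortcuts):
            return f"You often '{action}'. Pin it to the toolbar?"
        return None  # otherwise stay quiet – no paper-clip nagging

    def accept(self, action, shortcut):
        """The user said yes: only now does anything actually change."""
        self.shortcuts[action] = shortcut

# Nothing happens until the habit is established,
# and nothing changes until the user agrees.
assistant = GentleAssistant(habit_threshold=3)
for _ in range(3):
    suggestion = assistant.observe("export report")
assistant.accept("export report", "toolbar:export")
```

The design choice worth noting is that the assistant's only outputs are suggestions; the `accept` step belongs entirely to the user, which is exactly the line between helpful and irritating.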
It’s something we have to keep in mind when developing software designed to help you work. There is a fine but definite line between being helpful and being downright irritating.
It reminds me of the wonderful scene in Father Ted where a sales assistant tries to tempt Mrs. Doyle with an automatic tea-maker. “It will take the misery out of making tea.” Her response? “Maybe I like the misery!”