Brain scans show humans can empathize with obviously mechanical robots

[From the MIT Technology Review blog Mims’s Bits]

Friday, July 30, 2010

Brain Scans Teach Humans to Empathize with Bots

Mirror neurons light up when we’re put in their shoes.

When we watch a human express a powerful emotion – anger, fear, disgust – large areas of our brains light up, including those containing so-called “mirror neurons,” which are unique because they fire both when we produce a given action and when we perceive it in others. They are the basis of what neuroscientists call resonance.

Resonance describes the mechanism by which the neural substrates involved in the internal representation of actions, as well as emotions and sensations, are also recruited when perceiving another individual experiencing the same action, emotion or sensation.

To test whether the same brain areas activate when a human sees a robot expressing powerful emotions as when a human sees another human expressing them, an international group of researchers put volunteers into an fMRI machine – which can, with limited spatial and temporal resolution, determine which parts of the brain are active at any given time – and showed them clips of humans and robots making identical facial expressions.

On a very basic level, the researchers were asking whether humans empathize with even obviously mechanical robots.

The results, published last week in the journal PLoS ONE, were about what you would expect: in a default scenario in which participants were told to concentrate on the motion of the facial gesture itself, their brains showed significantly reduced activation when they watched robots expressing emotion than when they watched humans doing the same thing.

But a funny thing happened when participants were told to concentrate on the emotional content of the robots’ expressions: their brains showed significantly increased activity, including in the areas that contain mirror neurons.

So when we are asked to think about what a robot expressing an emotion might be feeling, we instantly become more likely to empathize with it. The very question – please concentrate on what the robot is feeling – presupposes that the robot even has emotions.

Whether or not we experience the robot as actually feeling something is therefore up to us – it depends on our beliefs about the robot’s sentience or non-sentience. It’s not hard to convincingly simulate at least an animal level of emotion in robots with even the most primitive gestural vocabulary – that’s the basis of the success of robotic therapy as carried out with, for example, the robotic baby seal Paro.

[Here is] the very same video that participants were shown while in the fMRI scanner. The robot itself is barely recognizable as human, and its gestures even less so, which makes it all the more intriguing that participants were able to imagine, just for an instant, that it has feelings, too. What might an even more humanoid – or more familiar – robot accomplish?

Christopher Mims is a journalist who covers technology and science for just about everybody.
