Googly eyes on robots: Evolution, social cues and medium-as-social-actor presence

[The role of evolution in how we perceive and respond to subtle social cues, and the implications for the design of technologies that evoke medium-as-social-actor presence, are explored in this story from Engadget. See the original for four different images. –Matthew]

[Image: Source: MyRecordJournal]

Why putting googly eyes on robots makes them inherently less threatening

We can exploit our social nature to be nicer to AIs.

Andrew Tarantola
May 22, 2019

At the start of 2019, supermarket chain Giant Food Stores announced it would begin operating customer-assisting robots — collectively dubbed Marty — in 172 East Coast locations. These autonomous machines may navigate their respective stores using a laser-based detection system, but they’re also outfitted with a pair of oversize googly eyes. This is to “[make] it a bit more fun,” Giant President Nick Bertram told Adweek in January, and “celebrate the fact that there’s a robot.”

“As we approach the completion of the rollout, we continue to be pleased by the addition of Marty in our stores,” a Giant Food rep told Engadget via email. “Our associates are appreciative of the assistance Marty provides them, freeing them up to do other tasks and interact more with customers. Speaking of our customers, they, too, are big fans of Marty, with kids and adults alike looking for Marty in store and taking selfies.”

But Marty’s googly eyes don’t just give customers something to chuckle at as they pass one another in the cereal aisle. Research shows that slapping peepers on inanimate objects puts the humans around them at ease and encourages them to be more generous and pro-social (as opposed to anti-social) than they normally would be.

“People pay attention to the presence of eyes,” Dr. Amrisha Vaish, assistant professor at the University of Virginia’s department of psychology, told Engadget. “Humans are very sensitive to the presence of other people, and we behave more socially in the presence of other people.” It’s called the “watching-eye paradigm” and exploits the deep-seated human trait of needing to be valued within society: managing our reputations and being seen by those around us as team players.

“In the course of our evolution, it’s been really important for us to be [able] to cooperate with others,” Vaish points out. Interpersonal cooperation has proved “so important to the evolution of the human species that we’ve become really sensitive to even sort of minimal cues of eyes,” she continued.

Dr. Pawan Sinha, professor of vision and computational neuroscience at MIT, concurs. “If one were to find an ecological reason why we are so attuned to see faces, it’s because the ability to detect faces is crucial for our social well-being and, when we are young, it’s crucial to our survival to be able to detect a specific human and be able to orient towards them,” he told Engadget. “It’s very important for us to be able to live our lives as social beings.”

Vaish’s own research in this field, specifically the 2018 study Eyes, More Than Other Facial Features, Enhance Real-World Donation Behavior, bears out this effect. Vaish and her team alternated photographs of a chair, a nose, a mouth and a pair of human eyes above the donation jar at a local children’s museum over 28 weeks. The weeks during which the eyes were displayed saw an average total donation of $27 — around $12 more than when the other images were shown.

“What we found is that the eyes — compared to the chairs — did, in fact, increase people’s donations,” Vaish said. “The numbers weren’t huge but there was a statistically significant increase.”

This effect extends beyond actions like donations; the watching-eye paradigm can also reduce antisocial behavior like littering and bike theft. It also affects people of all ages. “As young as 5 years of age, children become sensitive to being watched,” Vaish said. “When a peer is watching them, they show more prosocial behavior and less antisocial, less stealing behavior.”

The effect does not last forever, however. Vaish notes that in her previous research position, she found that putting a picture of eyes near the communal supply of milk drastically reduced the rate at which people would help themselves to it. At least to start.

“Initially, they’re very striking when you put them up, and then you sort of start to monitor your behavior more,” she said. However, over time, people became accustomed to the presence of these watching eyes before eventually sliding back into complacency with regard to their milk intake.

This intra-office phenomenon illustrates an unusual aspect of humanity’s evolution: We can see faces in (and assign agency to) almost anything. “People look for certain specific cues, physical features or behaviors, to determine whether something is alive,” Dr. Erin Horowitz, lecturer at the University of California at Santa Barbara’s department of psychological and brain science, told Engadget. “So something that appears to move on its own, people tend to process it as alive.”

This is an ancient prey response in humans, instilled over countless generations before we arrived at the top of the food chain. It’s better to see the leopard that isn’t there, Horowitz said, than to not see the leopard that is. As such, even highly abstracted and stylized depictions of eyes can trigger this response. “You could have two dots next to each other, and those would be considered eyes,” Horowitz said, “if there’s, say, a line underneath that looks like a mouth.”

And it’s “not just identifying predators,” she said, “but also identifying potential people that we can cooperate and interact with.” These hardwired evolutionary responses and visceral need for social bonds have led to the development of the “theory of mind.”

“You can think of it as a broad term for research on human capacity for social engagement,” Dr. Tamsin German, a professor at the University of California at Santa Barbara’s department of psychological and brain science, told Engadget. “Specifically, those ones that talk about the concepts we have of people’s internal, hidden mental states.

“People believe things, people want things, people hope things. And those internal states predict the behaviors that they will engage in,” she said. By piecing together a person’s behavior and explaining it in terms of those hidden states, one can glimpse the motivations and underlying beliefs of that person. “It’s a critically important skill for humans being such a social species,” German said.

As it turns out, slapping googly eyes on a roving robotic monolith like Marty can elicit the same response from humans even when we know the object is not actually alive. But there are limits to this effect, and surprisingly, the Uncanny Valley exists for robot eyes as well.

German notes that a wide variety of prey species have evolved agency-granting responses similar to humans’: “there is lots of work suggesting that they have a sensitivity to two eyes looking directly at them.” But in rhesus macaques, how those eyes are presented makes all the difference.

She points to a recent study from Princeton University, which placed various photographs and stylized, abstracted depictions of macaque faces in front of real macaques. “The rhesus macaque will look a lot at highly stylized, cartoony pictures of other faces of rhesus monkeys. And they look a lot at actual photographs of rhesus monkeys. But if you have very, very close, but not quite, images, they don’t like them at all.” As with humans, being almost there — but not quite — is interpreted as a negative signal.

A similar effect can be seen when humans observe the movements of robots, androids and other people. German references a recent multinational study in which subjects were shown static images of clearly mechanical robots, natural humans and androids like you’d see at the Disney World Hall of Presidents.

“You put them in an fMRI scanner and just essentially allow the brain to acclimate to what it’s seeing,” she said. Then, “using a technique called predictive coding, you look at which parts of the brain are excited when [the object in the image] starts moving. Essentially, asking the brain to tell you what it didn’t expect.”

When the robot starts clunking around in a very mechanical fashion and the human moves smoothly, researchers noted only slight electrical responses from the brain. “But [when] the android looks like a human and moves like a robot, various areas of the brain [are] kind of more active compared to a baseline, suggesting that the brain didn’t see what it predicted,” German explained.
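The logic German describes can be sketched as a toy prediction-error computation. This is purely illustrative — the function, the smoothness scale and the numbers are hypothetical inventions for this sketch, not anything measured in the study — but it shows why the congruent cases (robot-looking/robot-moving, human-looking/human-moving) produce little surprise while the android produces a lot:

```python
# Toy sketch of the prediction-error idea: appearance drives the brain's
# prediction about motion; the "surprise" signal is the mismatch between
# predicted and observed motion. Values are hypothetical, on a 0.0
# (fully mechanical) to 1.0 (fully human-like) smoothness scale.

def prediction_error(predicted_motion: float, observed_motion: float) -> float:
    """Squared mismatch between the motion predicted from appearance
    and the motion actually observed."""
    return (predicted_motion - observed_motion) ** 2

# Congruent cases: appearance and motion agree, so the error is small.
robot = prediction_error(predicted_motion=0.1, observed_motion=0.1)
human = prediction_error(predicted_motion=0.9, observed_motion=0.9)

# Incongruent case: looks human (high prediction) but moves like a
# robot (low observation) -- the android of the Uncanny Valley.
android = prediction_error(predicted_motion=0.9, observed_motion=0.1)

print(robot, human, android)  # the android's error dwarfs the others
```

The point of the sketch is only the asymmetry: the brain's "error" is largest exactly when appearance and motion disagree, which is the condition the fMRI study associated with elevated activity.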

This is what elicits the uneasiness and trepidation in people when they interact with a machine in the Uncanny Valley. It’s 200,000 years of evolution going, “Hey, stupid, this thing is moving when it shouldn’t be (at least not moving or looking like it should be). You need to scram before you get eaten.”

But adding eyes, even the googly variety, appears to help mitigate this effect by exploiting our social nature to artificially instill a sense of agency toward these inanimate objects.

“It’s just a signal that this is an animate thing, it’s going to have mental states and — provided you’re not trying to make it look so realistic, where the movement that it engages in looks wrong — you’re not going to get an Uncanny Valley effect,” German concluded.

This entry was posted in Presence in the News.
