How we treat animals will inform our future with robots

[This Medium OneZero interview with author Kate Darling explores several interesting implications of medium-as-social-actor presence evoked by human-like robots, including insights from the history of our views about interacting with animals. –Matthew]

How We Treat Animals Will Inform Our Future With Robots

Evan Selinger in conversation with Kate Darling from MIT Media Lab

By Evan Selinger
April 1, 2021

[This is Open Dialogue, an interview series from OneZero about technology and ethics.]

A few years ago, I read a fascinating paper by Kate Darling, a research specialist at the MIT Media Lab, that left a lasting impression. In “Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects,” Kate clarifies how easy it is, given the way the human mind works, for us to become emotionally attached to all kinds of robots — robots that have humanlike, animal-like, or even basic lifelike features. Given this tendency, she proposed a radical idea: granting robots some legal protections. Kate’s core insight is that as humanlike robots become more advanced and more deeply integrated into society, we should be wary of people becoming accustomed to mistreating them.

Simply put, if society doesn’t care about people verbally or physically abusing humanlike robots, we might be in for a rude awakening. Permitting people to do vile things to robots that resemble us might lead us to develop bad habits — habits that incline us to do the same awful things to other humans. From this perspective, protecting robots might be essential for protecting ourselves. Kate’s explanation that robo-abuse might pose a severe problem for us thus runs counter to the dominant philosophical discussion of robot rights — conversations that focus on whether robots could ever deserve rights to protect their interests and possibly even dignity.

Our conversation has been edited and condensed for clarity.

Evan: I’m thrilled to talk with you and explore some of the ideas in your new book, The New Breed: What Our History With Animals Reveals About Our Future With Robots. To be honest, I’m jealous. My daughter is excited to read it, and she hasn’t read my last book!

Kate: Sorry, but yours warns about a possible dystopian future, and mine is about cute animals.

Evan: We’re living in a strange time. The pandemic is affecting everything, and isolation has become a substantial problem. Innovative solutions are needed, and apparently, robots make people feel less lonely in places like nursing homes. What’s going on? And do you think this is quality care?

Kate: The thing that’s always fascinated me about robots is that people anthropomorphize them, become emotionally attached to them, and even develop social relationships with them. We do these things because we’re social creatures. We create social connections with all kinds of entities. We anthropomorphize animals, even pet rocks. But robots are a really interesting case because they’re special objects. Robots can sense and think, make autonomous decisions, learn, and move around in ways that trick our brains into projecting agency onto them.

Some people are concerned about current applications of social robots in educational and health care contexts where these devices interact with vulnerable populations. The main quality of care concern I hear is that these robots can’t effectively replace humans. That’s absolutely correct. Robots don’t offer anything remotely close to the type of care, teaching, or emotional connection humans can provide. Still, they offer something valuable that’s different.

One reason why I like to compare robots to animals is that robots are an effective replacement for animal therapy in contexts where animals can’t be used, and the goal is really just to soothe and give someone a sense of nurturing. When we anthropomorphize things that make us feel less isolated and lonely, it’s not an adequate replacement for people. But it’s also better than nothing.

Evan: Like so many conversations about technology, the one about how humanlike robots should appear and act has tensions and conflicting ideas. Sometimes, people worry about robots looking too robotic. Take Digit, a mobile robot discussed in the OneZero article “The Delicate Art of Making Robots That Don’t Creep People Out.” Digit can navigate staircases as well as pick up, schlep, and stack boxes. From a technical perspective, creating a robot that can successfully perform these tasks doesn’t require giving it a head. And so, Digit’s creators, Agility Robotics, didn’t give it one. It turns out this was a bad idea. People find headless robots unsettling! Form, not just function, matters.

At the same time, people criticize bots for being too humanlike. Digital assistants like Apple’s Siri and Amazon’s Alexa that sound like humans are called out for reinforcing sexism. Privacy scholars have a related worry — namely that one way to weaponize robots is to design them with humanlike features. Humanlike robots can trick us into letting down our guard by taking advantage of “our deeply ingrained responses to human body language.” Our mutual friend Woody Hartzog offers a great example of a potential robo-scam.

Woody’s family has a Roomba vacuum cleaning robot they named Rocco. It’s possible, Woody muses, that in the future, they buy a more humanlike version from a different company, one that has “a cute face, voice, and personality.” If Woody’s family becomes attached to the new Rocco, he’s not sure how great his resolve will be if, at some point, it “starts to sputter along as though sick…and says ‘Daddy…[cough]…if you don’t buy me a new software upgrade…I’ll die.’”

What’s the best way to think about this problem?

Kate: This is all about design. I think these design questions are especially fascinating right now because robots are gradually coming into workplaces, households, and even shared spaces like grocery stores. The grocery chain Stop & Shop uses Marty, a six-foot-tall robot with giant googly eyes and a huge base that looks a little bit like a penis. Marty looks for spills and hazards on the floor and alerts staff over the intercom when it sees them. Originally, Marty was also designed to be able to scan the shelves and do inventory because, obviously, the robot is too expensive to really be worth it when it only detects hazards. But even after about two years, such functionality hasn’t been rolled out. And now, people hate Marty. They say the robot is always in the way of their shopping carts. And they’re creeped out by Marty’s eyes — eyes that look like they’re continually observing everything.

My colleague Daniella DiPaola did a sentiment analysis on social media. She searched for people tweeting about Marty and found negative comments spiked when the grocery store celebrated the robot’s birthday, giving customers cake and balloons. Marty is the real-life version of Clippy, Microsoft’s design-fail digital Office assistant. As you know, Clippy was an animated googly-eyed character that didn’t interact in the social ways users expected. Both Marty and Clippy are examples of bots where the designers should have thought more about human-robot interaction psychology. Specifically, how making robots with humanlike features can backfire in contexts where they perform poorly as social agents.
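[For readers curious about the kind of sentiment analysis Kate mentions above, here is a minimal, purely illustrative Python sketch using NLTK’s off-the-shelf VADER analyzer on made-up example tweets. It is not the actual method, tooling, or data from DiPaola’s study; the tweet texts and score cutoffs below are assumptions for demonstration only.]

```python
# Illustrative sketch only: score a handful of made-up "tweets" about a store
# robot with NLTK's VADER sentiment analyzer. Not the study's actual pipeline.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

example_tweets = [
    "Happy birthday to the giant googly-eyed robot, I guess?",
    "This robot keeps blocking my cart in the aisle, so creepy",
    "Kind of fun seeing the robot wandering around the store",
]

analyzer = SentimentIntensityAnalyzer()
for text in example_tweets:
    score = analyzer.polarity_scores(text)["compound"]  # ranges from -1 to +1
    # Conventional VADER cutoffs: <= -0.05 negative, >= 0.05 positive
    label = "negative" if score <= -0.05 else "positive" if score >= 0.05 else "neutral"
    print(f"{label:>8}  {score:+.2f}  {text}")
```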

One of my pet peeves is people arguing that we need humanoid robots for everything because we live in a world built for humans. It’s a total fallacy. It’s often unnecessary, and, in many cases, you just can’t design humanlike robots that people find very compelling. We’re better off turning to animation for inspiration. Pixar perfected the art of taking an object and imbuing it with emotion and intent without needing to make it look like a person.

As to Woody’s concern, it’s urgent. We should find a way to address it through consumer protection law before designers figure out how to make compelling robots. The long history of persuasive design — from building shopping malls to casinos and mobile phones — suggests manipulative industries will want to use social robots to exploit people.

I guess the only good news is that research indicates that even when social robots are well designed, trust can break easily if a robot screws humans over, even in situations like cheating at a game.

Evan: Let me ask one more thing about manipulation. A video of dancing robots made by Boston Dynamics created quite a stir. The moves are technically impressive. But that’s not why the images caught on. It’s fun to watch silly robots, just like it’s fun to see animals and kids doing cute things. Should we be concerned about companies winning hearts and minds with fun and funny robot behavior? After all, their long-term goal may be to create an atmosphere of normalization that weakens the general public’s concern about the dangerous things robots can do.

Kate: You’re absolutely right. Our fascination with the Boston Dynamics videos goes way beyond marveling at the technical accomplishments. And, yes, I do think companies could use cute robot videos as a means of shifting public perception of robots. But in the case of Boston Dynamics, I think that they’re mostly just having fun with it. I don’t think they truly understand how powerfully we relate to these machines and the extent to which we treat them differently than other devices. Honestly, I think they just like getting all the YouTube hits.

Evan: Again, I’m really excited your book will be published soon. If the past is prologue, what does our history of creating animal rights and delegating agency to animals, like the police assigning tasks to dogs, teach us about the future we should strive for with robots?

Kate: Well, the book examines the robot-animal comparison in a few ways. For example, I discuss how our partnerships with animals draw strength from their different yet supplemental intelligence and skills. And yet, with robots, we’re constantly trying to automate tasks that replace what people do. Maybe there’s a better way to think about automation — partnering with robots that aren’t like us to improve our job performance.

I also explain why understanding our history with animals sheds light on the issue that vexes legal scholars of how to allocate responsibility for harm when robots do something unexpected. Using animal behavior as a baseline shows it’s just as absurd to assign responsibility to a runaway robot as it is to a tiger that a zookeeper negligently let escape from its cage. This is not a new problem: If we look back to ancient clay tablets, we find laws that specify what happens when your ox wanders off in the street and causes harm.

Additionally, I use the robot-animal comparison to help us make better sense of debates over whether humans should become emotionally attached to robots. When dogs started becoming part of the American family, some psychologists worried that people becoming too involved with their pets would be unhealthy and detract from their capacity to have warm human-to-human relationships. Maybe people would become so enamored with pets that they wouldn’t have babies anymore. Obviously, they were wrong. We’re very social creatures. Developing new relationships doesn’t have to diminish old ones.

Evan: That’s fascinating. I didn’t know there was a moral panic over caring for pets. By today’s standards, the concern is absurd. And yet, when I show my students videos of people in Japan expressing deep emotional anguish over the mortality of their Aibo pet robot dogs and mourning their deaths with funerals, many express discomfort. They don’t have a clear standard for when we become too emotionally attached to robots. And they’re not quick to condemn others for having different sensibilities. But these videos leave them unsettled.

Kate: I often get that reaction when I talk with people about robot pets. In the book, I try to show why we’ll have better conversations if we shift the discussion away from people saying, “Oh no, this is inherently wrong,” to more specific analyses of actual problems — problems like the one Woody mentions. For example, it might be useful to ask questions like, “Under current market conditions, can a device like Aibo take advantage of my elderly uncle?”

Evan: Since your work shows how easy it is for robots to evoke feelings of empathy in us, I want to shift the conversation away from robots as possible threats to robots as a possible tool for making us better people. Some ethicists propose we should consider allowing robots to nudge humans to be empathetic to one another. For example, robot babysitters or robot teachers could function as empathy engines by encouraging kids to be kinder. Conversations about topics like this one can sometimes have a futuristic vibe. But they’re essential to have now so we can guide innovation for the public good. As a robot scholar and parent of a child whom you lovingly refer to on Twitter as Babybot, what do you think?

Kate: I’ve got a mixed response. We already see some amazing use cases in education. In particular, some research on how robots can work with children who have autism suggests that for kids, robots can be social agents that don’t come with any of our human baggage. When a robot tells them something or does something with them, it’s not the same thing as if a therapist, parent, or other child behaves similarly. And it’s more interactive than a stuffed animal or character on a screen. Robots really are a unique tool.

But there are also products on the market that make me nervous, like robots that are pitched as companions for children with autism. Again, a comparison with animals is apt. The history of animal therapy suggests some varieties can be quite effective when integrated into a holistic treatment plan. But others offered in underregulated market exchanges, like dolphin therapy, exploit parents by giving them false hope, and may even be harmful for the kids.

Evan: How about we end this conversation by revisiting your position that initially got me fascinated with social robots? Back in 2011, you argued we might want to legally protect robots to avoid creating a society that sanctions robo-abuse and, in the process, inadvertently allows conditioning to occur that expands how cruel humans are to one another.

It’s hard to know how seriously to take the concern about robo-abuse fueling human-on-human abuse for a straightforward reason. It’s fundamentally an empirical issue. Longstanding debates over whether playing violent video games leads people to more readily commit violent acts in the real world seem to suggest most people do a good job compartmentalizing. The majority of people realize norms appropriate for fictional worlds, like the conduct needed to win first-person shooters, are inappropriate in everyday life and shouldn’t be carried over. How comparable do you think the robot case might be? An important difference is that unlike digital characters in video games, physically embodied robots interact with us off-screen.

Kate: Thanks for asking this question! You’re acknowledging that my views might have changed. Not everyone does this. Sometimes, people act like if you said something once, you’re committed to always believing it.

You’re right, it’s an empirical question. And we just don’t have the empirical answers yet. In the final part of my book, I explore the issue by looking at the history of animal rights in Western society — a history that’s filled with hypocrisy. In the 1800s, animal rights activists realized that while the public believed that some animals deserve our empathy, they weren’t going to successfully convince society to legally protect them. And so, they started using the argument that if people are allowed to be cruel to animals, they’re going to be cruel to people, too. Mind you, they didn’t have a shred of evidence to validate this claim. But lack of evidence didn’t matter. The accusation caught on, thanks to prejudices like classism. People alleged that the lower classes beat their animals and that the rest of society needed to be protected from their urges. This approach to animal rights persisted for a long time and has seemed intuitively correct to many people.

Arguments that violent behaviors exhibited in video games or pornography will spill over to other parts of life have a similar history. The evidence is inconclusive. But for many, they still feel like red flags. Based on this and other parts of our history with animals, I predict that people will likely feel uncomfortable with violence being done to some lifelike robots and will pass laws to prevent it, just as we’re uncomfortable with some animals being abused but perfectly fine eating hamburgers and chicken wings.

Evan: Are you speculating that arbitrary triggers that lead us to see some robots as cute and others like Marty as annoying will be the basis for robot protection laws?

Kate: Yes, exactly. It’s the right way to look at the entire Western history of animal consideration, including our Greenpeace-manufactured fascination with saving the whales because they can sing, and it’s the most plausible way to look at our future with social robots. I’m not saying it’s the best path. In fact, my hope is that these robot conversations will shine some light on our tendencies and perhaps nudge us toward thinking more deeply about what we want our relationships with nonhumans to look like.
