[The interview below from Forbes (where it features more images) is a very thoughtful discussion of social presence – primarily in response to independent robots (‘medium as social actor’ presence) but also to teleoperated ones that act as extensions of users. Note that the term presence appears once near the end. Fascinating, important stuff! –Matthew]
Relationships with Robots: Good or Bad for Humans?
Patrick Lin
Feb 1, 2016
As robotics and autonomous systems flourish, human-robot relationships are becoming increasingly important. They’re how we interact with and control the technology, from self-driving cars to sex robots.
The way technologies are designed can solve problems or create new ones. For example, by making robots look like humans or cute animals, we may develop emotional affinity toward the machines. This could help promote trust with users—but perhaps also overtrust? Could we become co-dependent or overattached to robots, causing problems when they’re not around?
To help explain the issues, here’s an interview with Dr. Julie Carpenter, who just published a book on the subject. She’s a leading expert on human-robot social interaction, with a PhD in Learning Sciences from the University of Washington. She’s also a research fellow at the Ethics + Emerging Sciences Group at Cal Poly and a consultant on social robotics and user experience with emerging technologies.
Carpenter’s complex and sometimes uncomfortable conversations about emotional attachment to robots—though it may seem a strange notion now—could eventually influence how everyone interacts with autonomous machines.
Her new book, Culture and Human-Robot Interaction in Militarized Spaces: A War Story, examines how bomb-disposal experts work and live with robots. Explosive ordnance disposal (EOD) personnel are among the first groups in the military to work closely in small teams with field robots on a daily basis.
Q: Your book is about robots, but it’s very human-centered. Is that weird?
A: I’m a roboticist who uses social science, or ethnography, to investigate human-robot interaction. When studying how people interact with robots in different situations, I use an interdisciplinary approach to understand the social dimensions of learning and human experience, including how we work and live with robots and other emerging technologies.
I do place a primacy on the human part of a human-robot interaction, and I collect individual stories about people’s experiences with robots. Using narratives as data involves rigorous methods of analysis for producing findings; it’s much more than recording and conveying user experiences.
Patterns and findings can emerge from examining all of these individual experiences, and it’s one way to study real-world phenomena. My goals are to provide knowledge about how people really use, perceive, understand, and feel about robots, as well as to offer recommendations based on that research.
Q: Much of your research is about human emotional attachment to robots. How did you come to focus on military scenarios for such an emotional topic?
A: How people function under different kinds of stress, how people interact with emerging technologies, and what factors in product, service, and social design engage or put off users are all interesting topics to me.
The intensity of defense work—whether it is in training, boots-on-the-ground, or even behind a desk—is unique. The core of a lot of defense work carries an enormous amount of responsibility for others: the physical well-being of yourself, your friends, colleagues, and teammates, people you are close to, civilians, your loved ones, and the lives of other citizens.
Daily work tasks, reinvention of self-identity through things like initial training, intense continuous specialized training, physical endangerment, and separation from loved ones, all combined with a strong organizational culture, make for compelling and emotionally powerful situations for individuals. All of these things affect any collaborative situation, whether it is human-human or human-robot.
Robots are tools, but they are tools that sometimes hold meaning for the people who interact with them, or through them, as when robots are teleoperated at a distance. Because technology is often produced for military use before it goes to a mass market, the military is also often where people interact with new technologies and products first. All of these factors made a defense setting a really interesting and useful place for me to start.
Q: What are some of the big things you found out?
A: There is a very clear awareness among the people using these robots that they are machines and tools. At the same time, that doesn’t prevent some level of social or pseudo-social interaction with the robots. And to be clear, these robot models do not resemble humans or animals at all: they’re tracked and clawed. But they do take over some tasks and roles formerly done by military working dogs, or even humans in some cases.
In addition, sometimes robot operators insert a very clear extension of themselves into the robot, much like we see people invest in game avatars. A certain comfort level with a critical tool like an EOD robot is a positive thing for the operator, because the users recognize the robots’ capabilities and limitations. So, deep and meaningful familiarity with a robot’s abilities and constraints is essential.
I’d also categorize extending a sense of oneself into a robot as a form of attachment. At what point on the continuum of human emotion does that become distracting from the task at hand, even for a split second—enough hesitation to change the outcome of a mission or task?
In the exploratory work I did with EOD, the people I spoke with said they sometimes blamed themselves when a robot failed to carry out a task successfully, even if it was a mechanical or technical failure and no fault of the operator. That’s one level of stress.
They also frequently described the robot as “my hands” or otherwise as a physical extension of themselves. Again, we’re talking about teleoperated robots that, right now, resemble small tanks and are tracked or wheeled and not humanlike in shape or movement.
In ten or twenty years, when humanlike and animal-like robots are employed in a more drone-like way from a greater distance, will a similar user self-extension or a new human-robot social phenomenon cause any hesitation during human-directed tasks and affect mission outcomes? Or will people develop an indifference to using robots as extensions of their own physicality?
These questions are significant ones, and their answers are applicable to a lot of human-robot team situations.
Think of how robots can be used in space exploration, medicine, first-responder work, and humanitarian relief efforts; how they are already used; and how critical the emerging scenarios they are used in can be. It’s a worthy topic to look at how we are working with all kinds of robots, and at the cultural and social implications, in addition to the exciting robotics innovations happening now.
Q: Why would someone feel affection for a robot, especially one that doesn’t even look like a human or an animal?
A: Everyone knows you can become emotionally invested in a keepsake, a special t-shirt or book or photo, because of what it represents or because it is a reminder of a special event or a sense of sharing a long history with that item. That’s one sort of attachment. But there are other ways people become emotionally invested in non-human things.
The cultural context of use for a thing we interact with is very important. In the particular role of the EOD robots, for example, the robot takes over some dangerous tasks that humans or animals would do: basic reconnaissance, assisting with procedures to keep unexploded ordnance away from people. Until just a few years ago, these tasks were typically handled by human or canine teammates.
Then, there is the robots’ embodiment, or physical appearance. How a robot looks and the way it interacts with the environment—with our environment—gives us clues about how we are meant to interact with a robot.
Embodiment can be functional and related to specific tasks. A robot may have humanlike hands so it can hand you a plate of food, let’s say. Those hands are functional for helping a human, then. But the shape also triggers a sense of recognition in us as people, and we tend to attribute humanlike traits to a machine with humanlike characteristics.
Of course, not all robots appear humanlike, as with the EOD robots, which tend to appear very mechanical. But the combination of factors in how these machines work with humans, what they look like, and the context in which we interact with them are also important contributors to whether we might—or might not—develop emotional attachment to a robot in a given situation.
Robots move. Regardless of how truly “intelligent” the object is, our instinct is also to ascribe organic characteristics to things that appear to have agency, independence from us, autonomy, and intent.
Movement can trigger this association for us, especially combined with some of these other dynamics. As AI and robots become more involved in our models of everyday life, I believe there will be a spectrum of emotional responses toward robots depending on their roles (for instance, caregiver, educator, industrial, companion, etc.) and individual user tendencies.
I think it’s important to note that just as robot design is not static and evolves, so will our relationships with robots. We’re still trying to figure out how to treat these “others,” but I anticipate there will be a time when norms are developed within cultures.
Perhaps fifty years from now people will be comfortable with a robot that does some domestic tasks in the home, and they don’t treat that robot with strong attachment but more like an industrial robot; they switch it off at night, or it puts itself in a closet and people don’t mind because it’s just another tool, like a dishwasher is now in many kitchens.
Perhaps another form of robot will help with caregiving for children or elderly family members, and then we’ve decided that a human-human user-AI model of attachment and interaction is fine, or even deemed healthy, useful, and normal.
Q: You mention caregiving robots as one example where attachment to a robot could be considered “healthy” and “useful.” Why might being attached to a robot be a good thing?
A: It would be desirable for the user to be emotionally attached to a robot when attachment seems helpful and not damaging to the human. That’s a short answer that covers a lot of possible scenarios.
As a very concrete example, I can imagine therapeutic situations where a robot is used as a temporary stand-in or surrogate for a human so a user/patient can practice healthy and successful social-emotional models of communication. I’m imagining this done with some level of supervision by a human guide/coach/counselor who would help a person work through therapies using a robot as a tool or medium for practice.
Modeling behaviors and interacting with human surrogates have been therapies used in many different clinical situations, and depending on a robot’s capabilities and a person’s needs, a robot could be useful. Eventually, this trained guide or therapist would also wean the user away from the human-robot therapy in order to “graduate” to human-human interactions.
Why use a robot at all instead of a human surrogate? One advantage could be that a robot can potentially model infinite human behaviors tirelessly, without judgment, and as an on-call tool in a therapist’s collection of options. For some people, a robot interface—even one that only barely mimics “humanlikeness”—can be a bridge in communication between patient and therapist. Even the patient’s knowledge that the tool being used is a robot frames it as objective, nonjudgmental, and noncritical, and that may also enhance people’s interactions with therapeutic robot tools.
A big reason for having a human therapist in the loop—even if the robot were very advanced and smart in this far-future world—would be to act as the advocate for the human. Because no matter how intelligent a robot becomes, it will always be in a place of rooted “robotness” and can never completely understand the human experience, just as we cannot fully appreciate what it will mean to be a truly intelligent robot someday.
But aside from therapeutic situations, I think it will be very natural for people to become emotionally attached to robots they interact with every day in different ways and in different situations. There are good reasons to design robots in ways that purposefully elicit attachment from people, and situations where people spontaneously have an affection or attachment for a robot designed without any degree of socialness.
When someone becomes attached to an object, they are less likely to detach from it or abandon the product. In the case of caregiving by a robot, it would be important for the user to be open and receptive to the robot’s help. Then, if the robot asks you to take your medication, you will do it and not just turn off the robot or hit the robot equivalent of “snooze.” That’s a positive outcome.
A consequence of purposeful design for attachment is that objects of attachment trigger the owner’s emotions in situations like decision making, and so can be agents of persuasion or otherwise affect someone’s actions. A robot in the home that acts as caregiver or assistant would therefore be an example of a robot designed to foster this sort of relationship with users.
Q: Why would it be bad to become emotionally attached to a robot?
A: Caregiving, romantic, and peer or teammate human-AI/robot roles will probably lead naturally to some level of human attachment. Problems could arise when this attachment interferes with people living in a healthy way, and “healthy” covers a broad range of things.
Someone’s therapeutic use of a robot for companionship or caregiving could be extremely helpful in making their life better at some level. However, we can also all imagine scenarios where participating socially or emotionally with AI/robots could be considered extreme. We see examples of these extreme situations represented as plot points in science fiction all the time.
There is always the double-edged sword of attachment that we experience as humans, in general. For all the pleasure emotional attachment to something can bring, other outcomes can be loss or loneliness. I said earlier that attachment can foster the desire to maintain or keep an object in good condition, and it makes people less likely to want to lose that thing or be separated from it.
Of course, if a loss or irreparable damage to a robot someone cares for does occur, then there will be negative emotional repercussions. To what degree that setback affects a person will depend on that individual and the situation, to be sure.
In the example of defense work, you can imagine the robots will be in situations that frequently lead to their disablement or destruction. If someone has an emotional attachment to a robot that is disabled, that robot is special to them. Even if it is technically like a thousand others from the same factory, that particular robot is different because of how it is perceived.
What will be the outcome of the loss of the robot? Will it be similar to denting a bumper on a cherished car, where there is frustration or anger but no long-term distraction? Or, will a robot loss be similar to when we lose a pet? Could losing a robot ever be like losing a human we care about? It is important to think about how any kind of loss can impact people, from short-term reactions to decision-making and long-term trust issues.
Whether or not an individual feels their human-human interactions in life are sufficient may also play a role in their vulnerability when participating with AI/robots in a way that we decide is unhealthy. Most people seek some level of social fulfillment and stimulation, and that leaves them vulnerable to dependence, enmeshment, or over-reliance on any social outlet, organic or artificial.
The bottom line is that these human-AI/robot interactions are transactions and not reciprocal, and therefore probably not healthy for most people to rely on as a long-term means for substituting organic two-way affectionate bonds, or as a surrogate for a human-human shared relationship.
However, if the AI/robot is teleoperated by a human as an avatar (say, in a long-distance relationship), that presents a different context and different issues. Even then, while there could be advantages, there is still a level of self-deception taking place regarding embodied presence. After all, this model of affection toward a robot is not one we’ve integrated socially in the real world, and culturally we are still figuring out our boundaries and expectations.
Is attachment to a robot problematic ethically? In the next 100 years, yes, it will be something we negotiate and discuss a great deal. Perhaps in the 100 years after that, it will be a new normal. Norms change, and paradigm shifts need to be examined, discussed, and acknowledged as transitional, without us necessarily being alarmist.
I want to be clear that I think how people participate with AI and robots will evolve and not be a static thing; how we relate to and rely on technology changes. How we accept technology evolves.
Put rich, intelligent, dynamic, embodied technologies like AI/robots in our physical space and in any sort of everyday role, and the arena of human-tech participation is changed and will continue to change as we make the technology different, use it in different ways, and live with it day to day.
Q: What’s next for you?
A: I’m writing a book chapter right now about human sexuality, emotional attachment, and human-robot sexual relationships. The working title of this chapter is “Deus Sex Machina: Loving Robot Sex Workers, and the Allure of an Insincere Kiss”, and it explores models of understanding human love, affection, and sexual feelings toward robots, as well as some of the ethical and cultural questions that emerge from potential emotional attachment to the complex technological system of a robot.
Anytime people interact with new or novel technologies, I’m interested in learning more about what the dynamic is and what it might turn into as it changes over time. It works out well for me that, like many people, I think robots are an interesting medium, and I get to talk to people about their experiences with them.