When Taking Robots Global, One Size Does Not Fit All
Even robots can get lost in translation. PCMag talks to Dr. Selma Sabanovic about the challenges of taking them global.
By Sophia Stuart
October 2, 2016
Whether it’s robots that dance, robot coworkers, sociable trash cans, or therapeutic devices, Dr. Selma Sabanovic from Indiana University has seen it all.
Dr. Sabanovic, an associate professor at the university’s School of Informatics and Computing, has been examining international robot development for many years. As her research has shown, most robots reflect their country of origin, and people’s responses to robots can be influenced by their cultural background. This could be a problem for the emerging global industry as, in purely economic terms, it’s hard for a manufacturer to make serious cash through exports if its robot gets lost in translation.
Sabanovic was in New York recently for RO-MAN 2016, the IEEE International Symposium on Robot and Human Interactive Communication, and talked to PCMag about the implications of our silicon cousins traveling outside national borders.
Dr. Sabanovic, in your PhD thesis you explored how different global regions draw on cultural traditions when making robots, most particularly in Japan where Shinto gives rise to animism [the belief that the inanimate possesses a spirit/soul]. Hence, the Paro robot comes with a birth certificate.
You read my thesis?
I did my best.
(laughs). I’m flattered. Yes, there are many discussions in robotics circles and popular media about how animism has affected Japan’s robot industry, and how it can be traced back to an early belief from the Edo period [1600-1867], when mechanical puppets known as Karakuri Ningyo were thought to be ‘alive.’ I was particularly interested in the way this origin story for robots was translated into their design as social creatures, and into specific applications for use in society, both of which are common goals in Japan. I also looked at how specific cultural understandings of what it means to interact socially with others—the right way to present yourself, show your emotions, etc.—become part of robot design.
When did you meet your first robot?
Unsurprisingly, in Japan. I’m originally from Sarajevo, Bosnia, and during the war my family moved to Istanbul, where I completed my studies. When I was about 8 years old, I went on a trip to Japan with my father, who is an electrical engineering professor and works on motion control for industrial robots. So that was my first exposure to robotics, but those robots were huge and definitely not designed to interact with humans!
In contrast to your primary research, which is all about robots that are intended for human interaction and assistance?
Yes, I became fascinated with robots as the field of social robotics emerged. My initial course of study was in social science [political science and international relations], and as the two fields started to converge I was intrigued by the intersections and contradictions between robots and our social lives.
As part of my PhD research I was based in the lab run by Dr. Takanori Shibata at AIST and at CMU’s Robotics Institute, studying how roboticists design sociality into machines. Later on, I started looking at adoption of, and responses to, socially assistive robots in both the US and Japan, using the Paro robot and other robot prototypes we developed in the lab.
We have seen that people’s cultural backgrounds can affect how they evaluate robot emotions and groups of robots, and how they might incorporate robots into their midst. In our studies with Paro in the US, we have seen that gender-normative behaviors show up in people’s interactions with the robot.
We also noticed that people actually have to do a lot of work to make what we see as ‘autonomous’ robot functioning successful—people have to support and work with robots to get the right effects from them. This reframes the problem of social robotics from designing a robot, to designing the interaction and social context in which people and robots meet, which is an issue more compelling to social scientists. I have also seen how incorporating robots into human environments can change the meanings that people give to things like culture, social relationships, technology, etc.
And now you’ve taken that work further to examine deeper cultural issues globally?
Yes, my work was funded by a [National Science Foundation] award, to look at how roboticists are designing sociality into machines and how humans respond to them in different regions. I became curious about whether there was a disconnect between the cultural ideas scientists were building into these robots and society’s expectations.
I recently received another NSF award with my collaborator, psychologist Eliot Smith, in which we will be looking at group interactions between humans and robots in the US and Japan. We’re expecting to see some differences in how people interpret and respond to robot groups based on the collectivist and individualist cultural contexts in these two countries.
It’s true there are huge cultural differences between robots from different countries. I covered the DARPA Challenge, and the ones from the USA looked like Terminator, whereas the winner, DRC-Hubo from Korea, was an agile, almost athletic model that emerged easily from driving its own modified SUV.
That is exactly the type of differences I noticed in robotics design. Since then, my students and I have also been doing experimental studies to understand how people respond to such different designs. In one project, we tried to identify culturally salient cues and examine how people respond to ‘robot emotion,’ for example, in developing human-like facial expressions to aid interaction.
In one study we looked at the ‘emoticon hypothesis,’ comparing Asian emoticons [very detailed, heavily eye-focused] with Western ones used in the US [simpler, more mouth-based] as a guide to developing robot ‘faces’ and expressions. The premise was that in Japan people will pay attention to eye cues and not so much to the mouth, and our findings partially supported this idea.
We are also investigating how we can present groups of robots to people in different countries so that they do not perceive them as threatening, and are able to work with them more successfully. The aim of our work is to directly inform systems design in HRI [human-robot interaction], as well as the ways in which we implement robotic technologies in society.
Interesting work and definitely helpful for those attempting to make robots that work in all parts of the globe. Which region interests you next, in terms of HRI robot culture?
As someone who has lived in Turkey, which is at a crossroads of many different cultures and traditions, robotics in Muslim countries in the Middle East intrigues me. Especially because, in Islamic culture, representing the human form is not acceptable, yet in places like Qatar they have used androids. One of my students and I did a study comparing attitudes toward robots in Turkey, Korea, and the US. Turkey fell in between the other two, with its ties to both West and East and its own shamanist background, so I’d be curious to examine the connections between Islam, local traditions, and people’s interpretations of robotic technologies further.
Thank you for your time, Dr. Sabanovic. We look forward to interviewing you again when you’ve emerged from your journey into the Middle Eastern robotics culture.
See you then!