Designing robots to account for intriguing ways humans interact with them

[This story from CNBC highlights the need for robot designers to carefully consider the range of responses, including medium-as-social-actor presence responses, their creations will evoke. For more on this topic see the recent story in Psychology Today’s blog “How do We Read Emotions in Robots? Of social robots, innovation spaces, and creatively trying things out.” –Matthew]

Next-gen robots: The latest victims of workplace abuse

  • Robots must contend not just with internal flaws and bugs but with humans.
  • Recent introductions of robots to everyday scenarios have led people to initiate some intriguing forms of interaction.
  • Knightscope’s security bot, for example, has been harassed by kids, smeared with red lipstick and used as a canvas by graffiti artists.

Mike Juang
Published 9 Aug 2017 | Updated 11 Aug 2017

With jobs, it’s often not the work that’s difficult, but the people.

Take STEVE, for instance. Throughout his career, his brothers have been knocked over by drunkards, bullied by schoolkids and even sprayed with graffiti.

But STEVE is not a person. He is an autonomous security robot that looks like a cross between a rocket ship and R2-D2, officially called the K5 Security Technology Enhancement Vehicle (STEVE, for short).

Hiccups, bugs and public failures are an inevitable part of the deployment of any tool in the real world, but robots must also be designed to account for sometimes unpredictable human interactions.

“Social robots, if they’re engaged in a public sense — even in a limited public sense — the design has to include considerations for social interactions,” said David Harris Smith, associate professor at McMaster University’s Department of Communication Studies and Multimedia in Canada.

Smith knows this firsthand. Together with associate professor Frauke Zeller of Ryerson University — and a cadre of other scientists, artists and engineers — he developed HitchBOT, a robot designed to travel across a country by “hitching” rides from friendly humans. Completely immobile, HitchBOT was designed with an LED “face,” a hitchhiking thumb and the ability to respond to simple spoken questions. Zeller said she wanted to create the impression that HitchBOT “is a helpless robot and challenge people to become active and engaged.”

The HitchBOT experiment came to an end after the bot was found dismembered and destroyed in a Philadelphia alley. With the dawn of the everyday robot age, destruction is a necessary part of creation.

“In terms of designing these robots, we have to take a step back and have people decide,” said Zeller. The key question is how — or if — we actually want to live with robots in our midst.

“We’re really dealing with some deep-seated cognitive processes when we start mixing humans and robots,” Smith said. Humans empathize with less-capable creatures and conversely fear hyper-capable creatures, epitomized by pop-culture robots like Terminators and even Transformers. Designing friendlier robots requires an element of what is called participatory design.

“If it’s gonna be in the context of engaging with human co-workers … there needs to be some kind of co-design in their spec, and that affects how it will fit in,” Smith said.

Knightscope recently made headlines when one of its security bots was found drowned in a shopping mall pond; slippery surfaces had prevented the K5 from detecting the stairs leading down to the recessed water. But the company — whose machines are designed to supplement rather than supplant human security guards by taking care of monotonous tasks like scanning license plates in a parking lot — also has a closer view than most of the emerging, unpredictable world of robot-people interactions. The Knightscope bot was designed to be “really smart eyes and ears that are mobile in nature,” but because the robots occupy such a public role, problems sometimes arise not with the machines but with the humans.

“People have tried to tag [the robot] or put graffiti on it,” said Knightscope CEO William Santana Li. “We have some machines that literally have bright red lipstick on them.”

Humans would sometimes see the robots and try to kiss them, hug them and ultimately hold on to them, a scenario no amount of testing could have anticipated. “You and your teammates and all the technical staff would not have sat in the conference room and said, ‘Hey, we need to come up with a hug test,’” Li said.

As in the case of HitchBOT’s demise, Knightscope robots have been subject to abuse. A robot assigned to patrol outside a library would be swarmed by kids who tried to play with it, blocking its path and even forming a human chain around it. Li said the solution to that problem ended up being more psychological than technical: Knightscope programmed the robot to simply stop.

“Do nothing, turn off the sound, turn off the lights, and just sit there,” Li said. “And guess what, after a couple minutes it stops and the kids go away.”
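What Li describes is a simple behavioral fallback: once the robot’s path has stayed blocked for a while, shut down the outward-facing behaviors and wait out the crowd. A minimal sketch of that logic in Python might look like the following; the class names, sensor and actuator interfaces, and timing thresholds are all hypothetical, since Knightscope has not published its actual control code:

    # Hypothetical sketch of a "go dormant when swarmed" behavior.
    # The sensors/actuators objects and their methods are assumptions
    # for illustration, not Knightscope's real API.

    import time

    CROWD_TIMEOUT = 10.0   # seconds of continuous blocking before going dormant
    CLEAR_HOLDOFF = 120.0  # "a couple minutes" of stillness before resuming

    class PatrolBot:
        def __init__(self, sensors, actuators):
            self.sensors = sensors      # assumed to expose path_blocked() -> bool
            self.actuators = actuators  # assumed to expose drive/sound/light controls
            self.blocked_since = None   # when the current blockage started

        def step(self):
            """One iteration of the patrol control loop."""
            if self.sensors.path_blocked():
                if self.blocked_since is None:
                    self.blocked_since = time.monotonic()
                elif time.monotonic() - self.blocked_since > CROWD_TIMEOUT:
                    self.go_dormant()
            else:
                self.blocked_since = None  # path cleared; keep patrolling

        def go_dormant(self):
            # "Do nothing, turn off the sound, turn off the lights, and just sit there."
            self.actuators.stop_driving()
            self.actuators.mute_sound()
            self.actuators.lights_off()
            time.sleep(CLEAR_HOLDOFF)   # wait out the crowd's attention span
            self.actuators.lights_on()
            self.blocked_since = None   # then resume normal patrol

The design choice is, as Li says, psychological rather than technical: the robot wins by becoming boring, not by evading the crowd or objecting to it.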
