After creepy laughing episodes, should Alexa be a computer or a person?

[A recent series of reports that Amazon’s Alexa has been laughing spontaneously has raised questions about whether machines should be designed to emulate humans. This story from Popular Science argues yes. A counter view can be found in “Alexa’s Creepy Laughter Is A Bigger Problem Than Amazon Admits” in Co.Design:

“Alexa’s recent case of the ‘Ha Ha Has’ underlines the uncanny valley between Amazon’s Alexa and any real human. It’s why Star Trek poked at the inabilities of the logic-based aliens known as Spock and Data to understand humor. Humor is an insanely complicated topic considered intrinsic to human evolution. It’s why knowing if to laugh and when to laugh, even with a close friend, can still be challenging at times for any of us!

In 10 or 20 years, maybe chatbots will master these social graces. But until they do, Amazon could have avoided its weird laughing problem entirely simply by positioning Alexa’s own personality as a computer you can talk to, rather than a cyborg bestie. Simply put, Alexa should be an ‘it’ rather than a ‘she’ because Alexa is nowhere near human, and bound to let us down again (or scare us) when we expect it to act like us.”

Note that both views suggest it’ll be fine for Alexa to behave like a human being when ‘she’ can do it more effectively. –Matthew]

Alexa should laugh more, not less, because people prefer social robots

Services like Alexa and Siri work best when they emulate humans.

By Rob Verger
March 8, 2018

Alexa, Amazon’s virtual assistant, was laughing when it shouldn’t have been. You might have seen tweets about it: The weird, disembodied chuckle bothered people for reasons you can imagine, as well as because it reportedly could happen unprompted—our assistants, after all, are only supposed to listen and speak to us after they hear the wake word. We want them to tell us the weather and set kitchen timers on command, not spook us with laughter. (Amazon has both acknowledged the problem and pushed out a fix.)

We all know that virtual personas like Alexa, Siri, and the Google Assistant are not real humans. They can’t laugh the way we laugh, because they are not alive. But it does, in fact, make sense for them to try to be social and emulate our behavior. They just need to get it right. Telling jokes, for example, is not an essential skill for an assistant like Alexa to have, but it will still cough one up if you ask.

“We’re social beings—we should build machines that are social as well,” says Timo Baumann, a systems scientist in the Language Technologies Institute at Carnegie Mellon University. He researches the interactions that occur between people and virtual assistants, and between people and other people. “Our research shows that it’s super important to try to build a relationship with a user,” he adds. Ideally, there should be a rapport there.

At Carnegie Mellon, for example, Baumann tested a virtual agent—an animated one that participants could both speak with and see on a screen. Its job was to give movie recommendations, and there were two versions: one was programmed to be more social than the other. “They [the study participants] liked this more social system significantly more, and they liked the movie recommendations significantly more,” he says. That was true even though the researchers hadn’t changed the part of the system that made the film suggestions—it’s just that the more social persona resulted in higher ratings.

Of course, anyone who has interacted with an assistant like Alexa or Siri knows that their social chops are works in progress. “The critical problem is that being social is super difficult,” Baumann says—the best they can try to do now is approximate it. He adds: “If you’re not correctly social, then it’s creepy.”

In fact, truly being social is different from telling a joke on command or laughing when asked, Baumann argues. Ideally, a virtual agent could do things like read the mood of the person with whom it is speaking, and adjust accordingly. That’s much harder to do than simply regurgitating a joke.
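To make that distinction concrete, here is a minimal sketch of the difference between laughing on command and laughing only when the mood seems to call for it. It is purely illustrative—not Baumann’s system or any shipping assistant—and it substitutes a crude keyword-count mood score for the acoustic and linguistic models a real agent would need:

    # Toy illustration only: a crude keyword-based mood estimate decides
    # whether a laugh is appropriate, instead of laughing whenever asked.
    POSITIVE = {"great", "love", "funny", "haha", "awesome", "nice"}
    NEGATIVE = {"terrible", "sad", "angry", "hate", "awful", "tired"}

    def estimate_mood(utterance: str) -> int:
        """Positive words minus negative words; a stand-in for real affect models."""
        words = [w.strip(".,!?") for w in utterance.lower().split()]
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    def respond(utterance: str) -> str:
        """Laugh along only when the user seems to be in a good mood."""
        if estimate_mood(utterance) > 0:
            return "Haha! Tell me more."
        return "Got it."  # stay neutral rather than risk a misplaced laugh

    print(respond("That movie was so funny, haha"))  # laughs along
    print(respond("What an awful, tired day"))       # stays neutral

Even this toy shows why the problem is hard: a misjudged mood score produces exactly the kind of misplaced laugh that unnerved Alexa’s users.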

But while those more advanced moves aren’t here yet, it remains important that the assistants we do speak with interact with us in ways that create rapport. “The machine is trying to build trust,” he says. “Destroying trust is so much easier than building it.” On a simpler level, that’s what is happening when an assistant like Siri or Alexa responds to an inquiry correctly—or flubs it.

In the case of the weird laugh, Amazon has provided an explanation, and made an adjustment. A spokesperson said via email: “In rare circumstances, Alexa can mistakenly hear the phrase ‘Alexa, laugh.’ We are changing that phrase to be ‘Alexa, can you laugh?’ which is less likely to have false positives, and we are disabling the short utterance ‘Alexa, laugh.’ We are also changing Alexa’s response from simply laughter to ‘Sure, I can laugh’ followed by laughter.”
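Why does lengthening the trigger phrase help? A longer phrase gives the recognizer more evidence to check against, so stray background speech is less likely to clear the match threshold. Here is a toy sketch of that idea—my illustration, not Amazon’s actual wake-phrase pipeline, with a made-up similarity metric and threshold:

    # Toy illustration: a noisy transcript that falsely matches the short
    # trigger phrase falls well short of the longer replacement phrase.
    from difflib import SequenceMatcher

    SHORT_TRIGGER = "alexa laugh"         # the old, easily misheard phrase
    LONG_TRIGGER = "alexa can you laugh"  # the replacement phrase

    misheard = "alexa laf"  # hypothetical noisy transcript of background speech
    THRESHOLD = 0.75        # similarity needed to count as a match (made up)

    for trigger in (SHORT_TRIGGER, LONG_TRIGGER):
        score = SequenceMatcher(None, trigger, misheard).ratio()
        verdict = "triggers" if score >= THRESHOLD else "ignored"
        print(f"{trigger!r}: similarity {score:.2f} -> {verdict}")

    # 'alexa laugh': similarity 0.80 -> triggers  (a false positive)
    # 'alexa can you laugh': similarity 0.57 -> ignored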

If you’re curious, the new laughter sound in this case is “teehee.” And while the fix doesn’t mean Alexa is suddenly perfectly social, at least it’s no longer being antisocial in this particular way. But if a virtual assistant could read your mood and maybe even laugh at the right time, the way a friend would, that might not be creepy. It might be friendly.
