Researchers in Japan make android child’s face strikingly more expressive

[The refinement of the robot child Affetto’s ability to evoke presence in just seven years suggests that we may cross the Uncanny Valley in the not-too-distant future. This story is from Gizmodo, where it includes three videos, including a looping video with both versions of the robot; for more information, including links to other videos, see the Osaka University press release. –Matthew]

You’ve Come a Long Way, Disembodied Robot Baby

Jennings Brown
November 16, 2018

Over the last seven years, Affetto, a “child-type android” created by researchers at Osaka University, has become far more lifelike.

In 2011, Affetto was a taut-skinned doll with a moving mouth and eyes—no more lifelike than an animatronic Cabbage Patch Kid. Today, the robo-boy can convey emotion with skin that seems to contort naturally—looking something like a Japanese Chucky doll, minus the body. In a new paper, the researchers behind Affetto explain their technique for upgrading their latter-day Pinocchio.

When the original Affetto was revealed seven years ago, many tech outlets were impressed by its realistic facial expressions, but in a paper published in Frontiers in Robotics and AI last month, the researchers explained the importance of developing a face that was even more lifelike:

“Faces of android robots are one of the most important interfaces to communicate with humans quickly and effectively. As they need to match the expressive capabilities of the human face, it is no wonder that they are complex mechanical systems containing inevitable non-linear and hysteresis elements derived from their non-rigid components… However, to date, android faces have been used without careful system identification and thus remain black boxes.”

To give Affetto’s face more humanlike expressions, the researchers developed a version with many more pneumatic actuators behind the skin. Then, instead of fine-tuning each actuator individually, they plotted 116 points on one side of the face and measured their three-dimensional motion as the face moved.

That measurement data allowed them to develop a control system they could apply across all the actuators, advancing the realism of the facial movements through better control of the synthetic skin.

“Surface deformations are a key issue in controlling android faces,” said Minoru Asada, one of the researchers, in a statement. “Movements of their soft facial skin create instability, and this is a big hardware problem we grapple with. We sought a better way to measure and control it.”

The latest version of Affetto also has an asymmetrical face, most noticeably around the eyes. The new Affetto is somehow less creepy than its rigid, gloss-faced predecessor, yet so much more unsettling.
