This dystopian wearable detects AIs pretending to be humans

[The tool described in this story from Motherboard addresses, and hopefully will raise awareness about, a key ethical challenge of evolving presence-evoking technologies: the obligation to make presence experiences voluntary. It reminds me of tools such as the ad-skipping features in DVRs, developed amid the conflicting efforts of advertisers to reach consumers and of consumers to avoid forced exposure to advertising messages, but with potentially much more far-reaching consequences. –Matthew]

[Image: Flickr/Abode of Chaos]

This Dystopian Wearable Detects AIs Pretending to Be Humans

The ‘Anti-AI AI’ will literally send a chill down your spine.

Jordan Pearson
May 26, 2017

As AI algorithms that can impersonate the human voice get better and better, it might not be long before you pick up the phone and genuinely can’t tell if you’re talking to a human or a machine that’s been trained to sound like one. In the far future, you might not be able to tell the difference in person, either.

In this vision of a future filled with computers pretending to be people, you might need a device like the Anti-AI AI. Designed as a fun proof-of-concept by Australian firm DT, the device uses essentially the same algorithms that impersonate human voices to detect if you’re being spoken to by a computer. If the Anti-AI AI clocks some synthesized vocal patterns, it uses a thermoelectric plate to literally send a chill down your spine. The prototype isn’t pretty at the moment, but a video shows the device discriminating between a recording of the real Donald Trump and an AI-generated impersonator.

DT’s mock-ups of the Anti-AI AI envision a sleek wearable that rests behind the ear. Imagine: It’s the year 2060 and you’re talking about your day at the cricket farm with a new barista at your usual coffee spot. Suddenly, you feel it working its way down your goosebump-covered neck. This isn’t a person you’re conversing with.

Or, maybe it’s 2023 and you’re listening to the news. The anchor throws to some new, damning audio from America’s latest horrorshow president. You’re shocked, but doubly so when your AI speech-detecting wearable goes off.

Since DT is based in Australia, I couldn’t ask them about the device, but a blog post on its website says that it took its team five days to create a working prototype of the Anti-AI AI using some popular machine learning tools. Its architecture seems pretty simple: the device streams audio via an iOS app to a cloud-based deep learning model built on Google’s TensorFlow platform for AI developers. The model, the post says, was trained on samples of AI-synthesized voices to learn how to detect them.
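The overall shape of that pipeline — frame incoming audio, extract features, run a classifier — can be sketched in a few lines. This is purely illustrative and is not DT's code: their system uses a cloud-hosted TensorFlow deep learning model, while the stand-in below substitutes a simple spectral-flatness heuristic (and the function names, threshold, and frame sizes are all assumptions) just to show where a trained model would slot in.

```python
# Toy illustration of a detect-the-synthesized-voice pipeline.
# NOT DT's implementation: their classifier is a trained TensorFlow
# model in the cloud; here a spectral-flatness heuristic stands in
# for the model purely to show the pipeline's structure.
import numpy as np

def frame_audio(samples, frame_len=1024, hop=512):
    """Split a 1-D signal into overlapping frames."""
    n = 1 + max(0, (len(samples) - frame_len) // hop)
    return np.stack([samples[i * hop : i * hop + frame_len] for i in range(n)])

def spectral_flatness(frame):
    """Geometric mean / arithmetic mean of the power spectrum:
    near 1.0 for noise-like frames, near 0 for tonal ones."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12
    return np.exp(np.mean(np.log(power))) / np.mean(power)

def looks_synthetic(samples, threshold=0.3):
    """Stand-in classifier. A real detector would feed per-frame
    features to a model trained on synthesized-voice samples."""
    frames = frame_audio(samples)
    score = float(np.mean([spectral_flatness(f) for f in frames]))
    return score > threshold, score

# One second at 16 kHz: a pure tone (tonal, low flatness) vs.
# white noise (high flatness) to exercise both branches.
t = np.arange(16000) / 16000.0
tone = np.sin(2 * np.pi * 440 * t)
noise = np.random.default_rng(0).standard_normal(16000)
print(looks_synthetic(tone)[0])   # False
print(looks_synthetic(noise)[0])  # True
```

In DT's described design, the heuristic would be replaced by a network trained on synthesized speech, and the classification would happen server-side, with the wearable only streaming audio and receiving the verdict.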

It’s not clear how accurate DT’s Anti-AI AI is, but since the AI model (and the entire system) was apparently hacked together in a couple of days, it’s probably not very good. To DT’s credit, they note it’s a work in progress and posted all their code to GitHub, a site for coders to work together on open-source projects.

But even if the Anti-AI AI never becomes more than a curiosity or a thought experiment, I can’t help but think that one day we might need something like it.

This entry was posted in Presence in the News.
