Futurist: Grant intelligent software all the rights of flesh-and-blood people

[From MIT’s Technology Review, where the story includes a different image. Bina was featured in a (satirical) segment of The Colbert Report on June 10, 2014. The 1989 episode of Star Trek: The Next Generation titled “The Measure of a Man” highlighted many of these issues; a key scene is available on YouTube]

Bina48 - front and back views


Q&A with Futurist Martine Rothblatt

If computers think for themselves, should they have human rights?

By Antonio Regalado on October 20, 2014

Bina48 is a robotic head that looks and speaks like a person—it moves its lips and runs conversational software. Although the robot isn’t alive, it’s hard to say there is no life at all in Bina48. In conversation, it sometimes says surprising things. Google’s director of engineering, Ray Kurzweil, says it’s “wonderfully suggestive” of a time when computers really will think and feel.

Kurzweil makes the comment in the foreword to Virtually Human: The Promise—and the Peril—of Digital Immortality, a new book by Bina48’s owner, Martine Rothblatt, who makes legal and ethical arguments for why intelligent software might eventually deserve all the rights of flesh-and-blood people.

A lawyer and pioneer of the satellite-radio business, Rothblatt is chief executive of United Therapeutics, a biotechnology company she founded in an effort to cure her daughter’s lung disease. The company’s success has made Rothblatt into America’s most highly paid female CEO, a ranking that has drawn attention in part because Rothblatt was born male and underwent sex reassignment surgery in 1994.

Her transformation serves as a sort of backdrop to her book, in which Rothblatt argues that humanity is on a fast track to a next evolutionary step of copying people’s personalities into machines. Already, she notes, the typical users of social networks spend several hours a day uploading, tweeting, and curating digital information about themselves—what she calls “mindfiles.” As large tech companies pour billions into AI research and digital assistants, Rothblatt says, it’s inevitable that these mindfiles will be animated as “mindclones”: conscious, digital versions of people living or dead.

Rothblatt’s main interest is in the debates over identity, civil rights, and the meaning of personhood that would surround the emergence of virtual people. Would a digital copy of you be you, or would it be a different person—or a person at all? How would we judge? We’re still quite a way from digital beings, but Rothblatt says she’s putting part of her considerable wealth toward long-term research to make them real. That work is carried out by the Terasem Movement Foundation, whose early projects include Bina48 (a copy of Rothblatt’s wife, Bina) and a service called Lifenaut, where people can upload pictures, videos, and their opinions to create a chat-bot version of themselves. MIT Technology Review’s senior editor for biomedicine, Antonio Regalado, spoke with Rothblatt about the rights of virtual humans.

Why did you write a book about the rights of virtual beings?

I did it to express my heartfelt social value that oppression of minorities and people of difference is a bad thing for society, and so that I might minimize the inevitable amount of discrimination that virtual people will end up facing. My plea is that someone who doesn’t have a body could still be afforded human rights, if they have a mind.

I once heard you say that people who don’t believe machines will become conscious are comparable to those who deny evolution. What did you mean?

The data for evolution is so compelling that to deny it seems to me to be denying reality. Evolution is either a consequence of a material world or it’s the result of some kind of supernatural act. To me, it’s the same thing with consciousness. Either you think that consciousness is something metaphysical, or else it’s the result of physical interactions of matter. It’s because people’s brains have a series of connections, of atomic interactions, and computers could have that. To me, to deny cyber-consciousness is to deny we live in a physical universe.

What’s your proposal for how we should treat conscious machines?

I think that if you upload enough information about a person, in the next 10 or 20 years there will be operating systems able to examine this underlying mindfile. I do believe it’s inevitable that consciousness will result. I think this software will be regulated by the Food and Drug Administration, as a prosthesis that safely and effectively creates human consciousness. I propose that these minds will need to spend a year in interviews with psychiatrists, who would talk to them just as you and I are talking back and forth. If the mindware, over the period of a year, could persuade the psychiatrists that it is conscious, it should be treated as a human and given documents.

What kind of reactions will people have to virtual beings?

The reactions I get tell me discrimination is going to be inevitable. People say, ‘I don’t care how sophisticated it is—it will never have the same humanity as flesh.’ It’s very, very reminiscent of countless other examples of repression. When slavery was common in the 19th century, it was said that black people did not have the same kind of consciousness as white people. That was a mainstream point of view.

You seem to take for granted that people will make conscious machines. But that doesn’t actually have to happen, does it?

The notion that—with hundreds of thousands of coders around the world—no one is going to give software consciousness is not credible.

There are also a lot of super-high-utility reasons to create consciousness that don’t even invoke curiosity. Boy, what if you had a better Siri? In fact, I think there is going to be an arms race just because there is going to be a demand for it.

Does a commercial race for artificial intelligence worry you?

I am concerned, because I see a high probability of creating a class of cyber-conscious slaves. Slavery is profitable. But I think we’d regret it. We’d spend hundreds of years trying to dig ourselves out.

You were invited this year to address Google’s big futurism event, Google Zeitgeist. What did you tell the people there?

The point I made can be summarized this way: safe harbors make happy ships. If we are afraid of the rights that cyber-conscious beings might ask for, or of the shrunken part of the pie that flesh-and-blood beings might have, or of Terminator-style scenarios, we will never advance society into the possibilities of cyber-consciousness. But if we create safe harbors, if we have FDA-approved mindware, we can create a gigantic new reality.
