[Although the topic seems far removed from, and even ironic in the context of, the grim realities of war in the news, this very thoughtful essay from Boston Review uses speculative fiction and thought experiments to illustrate some of the complex psychological and behavioral processes, and the ethical considerations, involved as we (at least those of us with the resources) consider how we should use technologies that create “lifelike models of real people” and evoke medium-as-social-actor presence. –Matthew]

[Image: A still from “Be Right Back,” a 2013 episode of Black Mirror in which a woman digitally resurrects her deceased boyfriend.]

Our Avatars, Ourselves

Generative AI has made it possible to create lifelike models of real people. Should we?

By Mala Chatterjee
September 29, 2023

In 2013 an episode of the British TV show Black Mirror imagined a harrowing possibility: that technology might allow us to recreate the dead. A young woman named Martha loses her partner in a sudden accident and copes with the loss—or perhaps refuses to cope with it—with the assistance of an unnerving artificial intelligence, trained on her partner’s data footprint, that can speak, act, and appear exactly as he did. It’s not long before Martha becomes obsessed with the emulation, gathering every remnant of her partner’s life to incorporate into the model and spending hours on the phone conversing with his “voice” rather than with her living loved ones.

Perhaps the most gut-wrenching aspect of the episode is how much of it invites empathy. It’s easy to understand Martha’s desperation to escape the pain of losing a loved one. Who wouldn’t be tempted to do so by clinging to such an avatar—even while knowing that to deny the finality of death would be the ultimate act of self-deception, a refusal to accept what it is to be human?

A decade ago, this prospect might have felt fantastical. But ChatGPT and other tools powered by large language models have made it a reality, generating uncannily human-like dialogue and inhabiting particular roles, perspectives, and identities on command. Just this week OpenAI announced that ChatGPT will now have voice and image capabilities enabling it to “see, hear, and speak.” Other tech companies promise services like hyper-personalized assistance, low-cost legal analysis, and on-demand therapy sessions. And one app in development, Talk To Your Ex, even promises to offer what Black Mirror only imagined: chatbots trained on text messages to behave like ex-partners, former friends, and deceased loved ones.

Beyond large language models, the tools behind “deepfakes” are producing increasingly sophisticated synthetic visual representations of real individuals. Recent breakthroughs in voice-cloning software provide the aural threads for stitching deepfakes and chatbots together into unified virtual beings, ones that can induce in us the sensory and cognitive experience of particular people. In short, we now live in a world with effective, accessible tools for constructing artifacts that don’t look, sound, speak, or act like artifacts at all. Rather, they can purport to be synthetic counterparts of real people, ones who lived or are still living very real lives. And they pose a pressing new question: when does making and using such a representation violate or wrong the person being represented?

* * *

This question is more puzzling than it may seem. Consider a thought experiment, not unlike the Black Mirror episode. You have just ended a relationship, but your partner struggles to accept the breakup. They gather up your publicly available and privately shared data footprints as raw materials and set out to create a synthetic and interactive representation of you. Soon they are spending all their time with this avatar, living out everything they wanted to share with you but never had the chance to—including many things you wouldn’t have wanted or agreed to do yourself.

Imagine further that after telling others how wonderful your avatar is to experience and engage with, your ex starts sharing it with friends and makes it a popular companion within their social network. And finally, imagine that your ex polishes up and brands your synthetic incarnation and makes it widely available to the public, either freely or at a price.

Where, if at all, does the violation occur? Among people I’ve asked, answers vary widely. Some do not feel violated by the creation and private use of the avatar so much as by its being shared with others. Others think the violation arises when the avatar is conjured up by third parties rather than by the person with whom the consensual relationship was shared. For still others, the violation consists in the avatar being monetized—made available for licensing or purchase on the marketplace by someone other than themselves.

Wherever the violation is felt to be located, it’s worth noting that these intuitions do not depend on the avatar being used as a tool for the commission of some distinct and familiar harm—say, defamation of the represented party. They appear to assume some weighty, perhaps ethically distinct, connection between one’s self and one’s avatar. At the same time, there is a vantage from which this intuition looks almost absurd to me and impossible to vindicate, as it seems irreconcilable with many of our most familiar and cherished practices.

To see why, just tweak the thought experiment: put yourself in the position of the former partner—the resurrector—desperate for the companionship of someone lost. What are the permissible ways to nurse our wounds? It cannot be that we are required to rid ourselves of relationships’ relics the moment they end (what would this even mean, really?), as though we must all undergo targeted lobotomies of the Eternal Sunshine of the Spotless Mind variety. At a minimum, our memories live on—themselves a sort of resurrection—and besides, relying on relics is a practice as old as relationships themselves. The ways we cope with loss are ever-evolving. After all, until fairly recently in human history, we could not mourn and relish past moments with the vivid assistance of photographs or footage.

How could it be, then, that training an avatar on these materials can give rise to a violation? Why should engaging with a mere product of these materials implicate rights and interests that engaging with the materials themselves does not?

There might seem to be a core difference between re-experiencing a person’s consensually shared content—old texts, images, and videos—and using those remnants to disseminate new content, appearing as theirs, that they had no opportunity to reject. But the line between revisiting and reimagining is far blurrier than it might initially appear. Within one’s mind alone, recollection and revision blend together smoothly. It is said that we may alter our memories every time we revisit them—and these recollections bleed right into fantasy when we imagine things that could have gone differently or never occurred at all. Can there be a fact of the matter as to where the normative boundaries of my self end, and where your vision of me is permitted to begin?

* * *

We might attempt to distinguish merely indulging fantasies inside one’s thoughts from taking actions in order to thicken those experiences. What goes on within one’s mind is fair game, the line would go, which is to say that others’ interests can only be implicated once affirmative steps are taken to give the fantasy some reality beyond itself. On this picture, there are no merely mental violations, and freedom of thought is preserved—but if we do project the imaginings into the external world and play Dr. Frankenstein by animating them into something concrete, we may then wrongfully divert the identity of the reanimated person into our fantasy against their will.

But even this solution strikes me as unsatisfying. We generally find it acceptable to give some of our imagination concrete reality, at least in certain familiar ways. Privately, we can write about each other in our diaries, and might even be encouraged to do so as a therapeutic mechanism for processing emotions and desires. How exactly is this different from speaking with a simulation? Publicly, it is common and sometimes even celebrated for us to use our relationships as raw material for works of art. The pieces of one’s life that might be creative fodder for another are often pieces of that creator’s life just as well—shared experiences in the literal sense—which is to say that a conception of ourselves and our stories as neatly siloed off from others is a nonstarter. Many do think it’s permissible, at least sometimes, to transform those we have known into our own fictionalized incarnations. To that extent, the line-drawing puzzle remains.

There is something particularly unsettling about the interactive nature of resurrections made possible by generative AI. These avatars feel more continuous with our personhood—like branches of us extending out into the world—while simultaneously feeling more hollowed out and manipulable: puppets forced to perform in whatever way the resurrector may please. But characters in novels are also made to act and react by their creator; they are simulations arguably even more at the mercy of a freely writing author than ones constrained by data and algorithms. Is the world of fiction, constructed from our reality’s fragments, really so cleanly separable from the one an artificial resurrector creates for herself? Or are artificial resurrections nothing more than dynamic fictional characters, albeit ones penned by mathematical models rather than by human authors?

The depth of this puzzle is laid bare by yet another version of the thought experiment. Imagine now that your ex artificially resurrects you, but an edited you that’s more aligned with their tastes and preferences: say, less argumentative or more interested in BDSM. To me, there is something disturbing about such a redesigned resurrection, as though imbuing my identity with traits contrary to my own is itself a further violation. But it also makes the artifact less like a version of me at all, and more like a thing of fiction. At what point would this simulation cease to be my resurrection, incorporating and manipulating my own will, and instead become something that simply manifests and expresses the will of someone else?

In the end, perhaps there really isn’t a principled line for us to draw. Maybe there are just varying degrees: shades of invasiveness and fantasticalness in the ways we recreate and re-experience each other. Artificial resurrections do seem to expand the range of these gradations, but the ranges themselves are wholly familiar. While most would not find it invasive for their ex to revisit old texts and voicemails for private pleasure, some might feel differently if the ex-partner continued to look at intimate photographs that were shared within the relationship. Many would find it even more invasive for these photographs to be manipulated into new experiences altogether. But this needn’t mean the manipulations constitute something normatively new.

The degree to which something is invasive, fantastical, or violative also seems to be a matter of our ever-evolving norms. The uncanny valley we face now might really be a slowly shifting glacier, which is to say that our instincts around when artifactual representations go “too far” change over time. In a world where artificial resurrections are routinely created, the extent to which we feel that they constitute violations could also dwindle. We might come to believe that the mere act of engaging in an intimate relationship means accepting the risk of resurrection, just as we already accept the risk of becoming artistic fodder. And yet, since the path of least resistance might lead us to this future, we must contend with these questions now. But can we trust our present intuitions, or are they hopelessly biased toward preserving the status quo?

All this is to say nothing of the dead. Might recreating them, as Martha does in Black Mirror, be a violation? The question implicates a philosophical puzzle that traces as far back as the reflections of Diogenes: How can the dead be wronged or harmed at all, when they no longer exist?

This puzzle complicates an otherwise tempting response: that so long as artificial resurrections are made with the consent of the resurrected person, there is nothing wrong in creating and using them. The dead cannot consent to being artificially resurrected, but does that mean their consent is unnecessary? Moreover, even if one consents to some future instance of resurrection, we might think this consent ought to be revocable. If I see you using my artificial avatar in ways I didn’t anticipate or find unsettling, I should have the right to change my mind and ask you to stop. Our stake in our own resurrections may not be like our stake in our property so much as our stake in our own bodies: we may consent to their use by others—say, for labor or for intimacy—but are entitled to revoke that consent whenever we please.

This vision would not cohere with a world in which resurrections can be bought and sold on the marketplace. It would require a world constrained by inalienable individual rights to license our resurrections and revoke them at will. But even if we did start adopting standard-issue consent agreements—say, including resurrection provisions in wills alongside do-not-resuscitate orders and organ donor statuses—the fact remains that the dead’s consent can never be revoked. Should a decision we make while alive doom us to the threat of eternal recurrence, forever at risk of being dug up from the grave to dance at the whims of the living?

* * *

I don’t see easy answers, but the puzzle itself sheds light on our values. Artificial resurrection reminds us that we understand ourselves as constituted by the identities we self-author but also by the representations of us made and experienced by others. We might feel entitled to some creative control over these avatars, but our relationships and shared experiences enmesh us, as we mutually author and perform in each other’s parallel personal narratives. We long to curate others’ representations of us, but also expect the freedom to construct our own representations of those around us. The challenge is to structure a world where these claims can coexist. If artificial resurrections do extend us, they also further enmesh and entangle us.

That said, there are deeper reasons that reflecting on artificial resurrection is so challenging and unnerving for us—ones beyond the question explored here, but which also call for reflection. Much of the disquieting impact of the Black Mirror episode stemmed from its seductiveness: the palpable temptation to chase the dragon, if one can, and manipulate oneself into still experiencing the presence of a loved one after they’re gone.

Artificial resurrection is alluring as a possible antidote for our existential discomfort with loss, change, and mortality—a discomfort that’s grounded in how deeply we value the connections and shared experiences we stand to lose. But the irony is that this antidote could prove so alluring that it ultimately corrodes our ability to connect and share experiences at all. We are tempted to extend and enhance relationships precisely because we are so attached to them, and we long to resurrect those we lose because we feel they are irreplaceable. If we follow these instincts too far, we might create a world with little room left for the very thing we sought—a world filled with resurrections, maybe, but where little new grows that we’d care to resurrect.

