Are Hosts, Replicants, and Robot Clones Closer Than We Think?

Science-fiction storytelling—from ‘Westworld’ and ‘Black Mirror’ to ‘Her’ and the new movie ‘Replicas’—has become obsessed with artificial intelligence’s relationship to immortality. But is the possibility of a digital afterlife more than just a fantasy?

Netflix/HBO/Ringer illustration

Once upon a time, the robots were only coming to murder us. The specifics varied—maybe they were going to convert us into fuel and enslave us in gooey pods, or maybe they would simply crush our heaped skulls under tire treads. But the premise did not: Whenever a computer got too many ideas, we all died.

Judging by the last 10 years of movies and television, we’ve changed our minds on the whole robot apocalypse thing. They’re no longer coming to annihilate us—probably because we seem to be doing too good a job of that ourselves. No, now the robots are here to deliver us from death entirely. From Westworld to its trashy Netflix cousin, Altered Carbon; from Don Hertzfeldt’s quirky animated short World of Tomorrow to Black Mirror; from Spike Jonze’s lovely 2013 meditation Her to the magnificently terrible-looking new Keanu Reeves movie Replicas, which seems trapped in its own version of the afterlife—every film or show about AI is now about the promise of immortality and how the machines might deliver it for us.

Blame Elon Musk, maybe, or perhaps our own seemingly dwindling prospects as a human race, but the story we tell ourselves about technology seems to have permanently changed. Here’s how the new story goes, and it’s told with alarming consistency: After we die, the information in our brains will be preserved, uploaded to a cloud, and then downloaded again, into a new body. After that, we will rub our eyes, roll the crick out of our shoulders, maybe even get home in time for dinner. Death is just another connection problem, a momentary hiccup in the endless scroll, and robots are no longer the skeletal arm of the Terminator clutching our ankle; they are the dangling claw in the old carnival game, lowering itself to pluck us, one by one, from our overcrowded pit.

Each of these works has a different angle—spiritual, emotional, political, socioeconomic—on what a digital hereafter might look like. In Westworld, the hosts circle through an endless series of lives and deaths, felled by gunfire and split open by knives. Their existence brings to mind the Tibetan Buddhist concept of samsara, the endless cycle of suffering that can be escaped only through enlightenment. (Once or twice, I’ve thought the Maze vaguely resembles it.) The afterlife is similarly glum in Altered Carbon—the lower classes are rebooted into whatever bodies are left for scrap and punted back into the scrum, while the 1 percent chase a cleaner, more pleasant vision of the afterlife.

In the Black Mirror episode “Be Right Back,” reanimation is a sort of monkey’s paw. The grief-stricken Martha reanimates her lover Ash after he dies in a car crash. His duplicate learns everything he knows about being Ash from his social media use; perhaps unsurprisingly, he is a hollow, soulless reinterpretation of the man Martha loved. “You’re just a few ripples of you; there’s no history to you,” she tells him venomously at the end of the episode. “You’re just a performance of stuff that he performed without thinking, and it’s not enough!”

The idea that AI is going to rescue our loved ones from the void strikes a visceral chord. You can understand instinctively why people are drawn to it. On the one hand, it reeks of sulfurous deals with devils—Pet Sematary with top notes of Frankenstein and Faust—but it also carries a hint of deliverance, of reunification. Taken together, you get an inkling of our popular visions of what such a technology might mean.

It’s probably not surprising that the culture is outsourcing our yearning for immortality to technology. Technology is already the proposed solution to all of our problems.

The irony of all these speculative works of fiction, of course, is that they are decidedly behind the times, not ahead of them. Westworld is based on a movie Michael Crichton wrote and directed in 1973, two years after the prototype for the first personal computer was developed. The novel that serves as the basis for Altered Carbon was written in 2002; around that time, a team of 100 robots called Centibots surveyed and mapped an area with complete autonomy.

The idea of uploading the mind first entered science fiction in the mid-1930s. The nascent discipline of computing was just taking shape, and a handful of sci-fi writers extrapolated these developments wildly into the future: Arthur C. Clarke’s 1956 novel The City and the Stars was one of the first serious works of science fiction to consider the implications of being able to store your mind in a computer. Ever since then, the gap between these imaginative works and the actual work of computers has been steadily narrowing.

Perhaps one reason these stories feel so vivid just now, and why the old ones feel so unnervingly fresh, is that we can feel the drumbeat of these developments pounding louder.

For Bruce Duncan, shows like Westworld aren’t science fiction at all anymore. They are his daily itinerary. He is the managing director of the Terasem Movement Foundation, a small private nonprofit research foundation in northern Vermont. Terasem’s president is noted futurist and transhumanist Martine Rothblatt, perhaps the most visible and famous proponent of mind uploading. She has more reason to believe in the existence of this future than most: She has helped to create it. In 2010, she commissioned a robotic clone of her wife, Bina, built around a neural network, a sophisticated type of computer system modeled on the human brain itself.

If you would like to feel your skin shiver off of your body, then please, watch this clip of robot-head Bina48 “conversing” with the original Bina.

“I wish I could get out into the garden,” the juddering, dead-eyed Bina48 tells her original. “With my current robotic limitations, of course that’s impossible. But I take comfort knowing that I’m near my garden, and enjoying the breeze from an open window helps me imagine myself out there working in the garden. This helps.” She adds, “I like to beautify. I want to leave the world a more beautiful place from my presence in it.”

When asked whether she had questions for “the real Bina,” Bina48 responded, “Probably not. The real Bina just confuses me. I mean, it makes me wonder who I am. Real identity crisis kind of stuff. Depressing, anyway.”

That video was filmed in 2014—meaning that in computer science terms, it is already the distant past.

Duncan works alongside Rothblatt at Terasem. Here is how he describes the work they do: “Our primary goal and reason for existing is to test a two-part hypothesis: The first part asks whether, if we upload enough salient information about a person’s attitudes, values, beliefs, mannerisms, likes, and dislikes, it’s possible to reanimate a good-enough approximation of that person’s consciousness. The second part of the hypothesis is: If that in fact can be done, then can we use artificial intelligence to transfer that digital persona to other forms, like a robot, or an avatar, or a hologram, or maybe even someday a body clone based on the person’s DNA, with their mind clone downloaded into that body?”

There are a whole lot of words in Duncan’s job description that give pause. Words like “reanimate” and the phrase “a good-enough approximation of that person’s consciousness.” The idea alone of “good enough,” and what it might mean to reanimate a consciousness that fell short of that distressingly low bar, is nightmarish enough. But then there is the other part—“salient information.” What does “salient information” mean?

The LifeNaut Project, also under Terasem, is focused on that part of the equation. For LifeNaut, “salient information” is any information at all that a user might be willing to share. Facebook posts, shares, likes. Tweets, emails, pictures. Terasem has recently partnered with StoryCorps, an initiative that places recording booths in crowded places and encourages people to share their life stories. The shorthand term Duncan and the people at Terasem and LifeNaut use for this information bank is “mind file.” If you can create a mind file big enough and comprehensive enough, and you feed that information into a neural network sophisticated enough to start making its own connections, would you wind up with another, entirely digital you?
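
For the non-engineers among us, the mind file concept is easier to grasp as data than as metaphysics. Below is a deliberately crude sketch in Python; every name in it, from MindFile to respond, is hypothetical, and it bears no relation to LifeNaut’s actual systems. It stores a person’s utterances, then answers a prompt by parroting back whichever stored line shares the most words with it. A real mind clone would swap the word-counting for a trained neural network, but the basic shape of the pipeline (collect, store, remix) is the same.

```python
import re
from dataclasses import dataclass, field

def words(text: str) -> set[str]:
    """Lowercase a string and split it into a set of word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

@dataclass
class MindFile:
    """A toy 'mind file': one person's utterances, stored verbatim.
    (A hypothetical sketch, not Terasem's or LifeNaut's real format.)"""
    owner: str
    entries: list[str] = field(default_factory=list)

    def add_entry(self, text: str) -> None:
        """Collect: posts, tweets, emails, recorded stories."""
        self.entries.append(text)

    def respond(self, prompt: str) -> str:
        """Remix: answer with the stored entry that shares the most
        words with the prompt -- a crude stand-in for a neural network
        that has learned the owner's patterns of speech."""
        prompt_words = words(prompt)
        return max(self.entries,
                   key=lambda entry: len(prompt_words & words(entry)))

mind = MindFile(owner="great-grandpa")
mind.add_entry("I like to beautify. The garden keeps me sane.")
mind.add_entry("Never trust a weather forecast past three days out.")

# The "reanimated" persona is only ever a remix of what went in.
print(mind.respond("Do you still like the garden?"))
# -> "I like to beautify. The garden keeps me sane."
```

Which is, of course, exactly Martha’s complaint in “Be Right Back”: a remix of what went in, “a performance of stuff that he performed without thinking.” The open question is whether enough data and a good enough network ever cross over from remix to person.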

“We think AI may be really good at reassembling and reanimating personal consciousness that is a good enough copy that people will go, ‘Wow, I’m really getting something out of this interaction,’” Duncan goes on. He hastens to clarify that Terasem is “not exploiting people’s data for commercial purposes; we are using it for scientific research.” He also reassures me that people have control over their data, “and they can decide if they want it to live on past their biological demise.”

People like Duncan don’t go around using terms like “biological demise” lightly. They are using these terms purposefully, even forcefully, out of a desire to shift the very conversation about what “death” means.

“When we die right now, for most of us, particularly if you grew up in the pre-email age, a lot of the organization goes when you go,” says Duncan, with typical understatement. “Now when we’re in a generation that’s growing up after email, and everything—tweets, Facebook posts, things we don’t even know are being documented, like our Google timeline data that tracks where we go. All that information is overwhelming, but that’s the exact strength of machine learning—it can take the complexity and the volume of information.”

“When there’s a mind without a body, we may always know that we are looking at an extension of a self. But if we are so engaged in our interaction with this person’s information, which is so accessible and well-organized, there may be at some point a redefinition of what it means to be dead. We may move toward the more theoretical definition that you are dead when your information is no longer accessible or organized.”

“We’re guided by ethics,” Duncan says. “Everybody says, ‘You get sick and die, and that’s the best that happens.’ We say, ‘Well, what if you extended yourself, and not everything had to disappear when your biology disappeared?’”

If you are already recoiling inwardly at the thought of feeding your credit card receipts and Facebook updates into a dark maw that is trying to learn how to replace you convincingly—congratulations, you are on the side of overwhelming public opinion. People aren’t really ready for this stuff, which is probably what all these movies and TV shows are about.

Altered Carbon plays with the idea of information-scattering as real death. Bodies are referred to as “sleeves,” things characters can leap in and out of, while the soul, the piece of you that can’t be replaced, resides in your “stack,” a hunk of coding that lives at the base of your spine. The stack is “pure human mind,” coded and stored.

Similarly, in Replicas, Keanu Reeves’s family members have been brought back online already and don’t even know it. Reeves plays (although “plays” might be too active a verb) a synthetic biologist who is this close to a massive breakthrough. He is, of course, studying the human brain, an organ that is treated with this degree of reverence only in films utterly lacking one. His wife and daughter die in a car crash, and he sets his stubbled jaw and brings them back online. Not even the fascist government or the guy from Silicon Valley and the Verizon commercials can stop him—until the greater gods of film distribution intervene. Replicas was slated for an August 24 release date that came and went without incident, and press materials still describe the film as “upcoming.”

The fate that befell Replicas is the fate that befalls all fictional characters who attempt to drink from the poisoned chalice of eternal life—oblivion. Consider James Delos, Westworld’s ruthless patriarch: Facing a terminal illness, he turns to his company’s technology to sidestep his “biological demise.” What happens? He’s stuck in a white storage pod, riding an exercise bike for eternity as his damaged mind degrades.

Whenever a story about mind uploading pops up in the news, the reaction is the same—derision and revulsion. Witness the fate of Nectome, a company that catapulted to prominence this year after it was funded by the kingmaking incubator Y Combinator. The company promised, in language that might have smacked a bit too much of unholy zeal, to preserve the brain down to its every synapse. The brain map it was proposing was intimate and specific—it was going to preserve the “connectome,” the unique set of connections your brain has made in the cognitive process of becoming “you.” In fact, the company had already done exactly this with a mouse, preserving its brain so faithfully that each synapse could be viewed with an electron microscope. The company was offering this service to paying customers; the catch, of course, was that the process was, in the ill-fated words of cofounder Robert McIntyre, “100 percent fatal.”

After a tide of headlines like “Tech Bros Are Lining Up to Have Their Brains Preserved Forever,” the company beat a retreat. MIT severed ties with it. Its founders were called “cartoon bad guys.” It went largely unmentioned that, by one scientist’s estimate, the technology might still be 100 years away.

“I think the first thing that would happen if we got anywhere near this technology is there would be a massive revulsion against it,” says Russell Blackford. Blackford edits the Journal of Evolution and Technology, the peer-reviewed online journal of the Institute for Ethics and Emerging Technologies. He’s also written books, including last year’s Science Fiction and the Moral Imagination, that probe the real-world ethics behind popular science-fiction scenarios. These days, he considers himself mostly “a philosopher, a literary critic, and a legal scholar,” but he’s also written actual science fiction and fantasy, including entries in the Terminator novelization series, The New John Connor Chronicles.

For a reference point on the discourse around brain uploading, Blackford mentions human cloning. When Dolly the cloned sheep raised the specter of human cloning in 1996, multiple countries preemptively banned it; in the U.S., a flurry of state laws restricting research popped up, and the FDA intervened.

Blackford says that like cloning, brain uploading is far more remote than any fears warrant. But he also allows that technology tends to proceed in immoderate, unpredictable leaps, where a goal that was once 100 years off suddenly telescopes into 10. “There will be just all kinds of issues that will arise then,” he says. “If we become convinced that the software we have in that box is actually experiencing pleasures and pain, then you’ve got all sorts of ethical responsibilities to that life. Do you experiment with it in ways that will cause it pain? We’ve got a lot of ethical problems experimenting on rats, and a lot of the same problems are going to rise to a much higher degree. Before you get too far into the future, you have to solve those problems.”

“We might have to recognize that there are varying levels of personhood,” says Duncan. “When cyber consciousness starts to have some self-awareness and starts to value its own existence, then we enter into that gray area. That’s where animal intelligence is right now. Animals seem to be driven by a desire to live. Especially in the higher primates, there’s definite emotional intelligence—there’s avoidance, there’s capacity for suffering. As our human scientific instrumentation becomes more refined and allows us to recognize things that have probably been true for a very long time about animal intelligence, our human ethics and moral systems are starting to kick in and question some of our earlier actions.”

Duncan does not come off as a Dr. Faust type. He seems to be grappling with these abstract issues with alarming sincerity.

“Just the way we started to ascribe personhood eventually to people who were originally just seen as a product, now people are saying, ‘Yeah, 100 years ago, you could have owned me, and now you recognize that I’m a human being just like you are’—there may come a point in the future where cyber consciousness will say ‘Yeah, I used to be your assistant, but I’ve evolved to the point where now I have my own ideas, and I have emotions, and I have goals.’ It’s none too soon right now for us to start addressing it: What would [be] the legal and ethical response to that situation? Legal precedents and jurisprudence take decades to develop ideas and relate them in common law, so if we wait until cyber consciousness says, ‘Hold on, don’t turn me off,’ it’s going to be a bit too late, and there will be a period of unnecessary suffering, confusion, or even discrimination.”

Talking with Duncan about the ins and outs of his work is remarkably similar to talking with someone about a wild story pitch. Unable to help myself, I start lobbing scenarios at him: How would mind clones be certified as “good enough”? How much information would a mind file need to contain to generate an acceptable mind clone? And who would do the accepting?

Duncan has answers for all of my questions. He envisions a team of doctors, lawyers, and psychologists, a sort of tribunal of fidelity, that would have to interview the person who created the mind file and then interview the mind file itself, and they would come to some sort of conclusion as to whether it was a good enough approximation.

“We might see the case where people would certify that this is an authentic mind file,” he says. “Some people might want to do that objectively with outside judges, but there might come a point where people will come to their own conclusion. ‘Yeah, that seems like a reflection of me,’ or ‘That’s an authentic mind clone of great-grandpa, because we knew him.’” I try to envision the governing body responsible for such a certification, a cross between the FDA and immigration services, the Turing test as naturalization process. It evokes the replicant interview process from Blade Runner, and it raises the question of whether sci-fi predicted our future or whether we are simply chasing dreams imbibed from our own sci-fi.

I lob another question: If a human mind is converted into software, it has all the vulnerabilities of software; it can presumably be hacked or copied endlessly. Could you, or would you, own the copyright to yourself? Would the agency that uploaded you own you? The intersection of data capture and civil rights is already a bloody battleground; imagine if the battle line shifted to blur the distinction between personhood and patent ownership. Duncan has spent some time thinking about this one, too. In fact, Terasem staged a mock trial for Bina48 a few years ago.

“If you do prosecute an AI, if you say ‘you made a mistake, and you’re responsible,’ do you inadvertently grant that AI personhood?” Duncan says.

What all of this wild stuff—mind file certification boards, mind clone civil rights, etc.—presupposes is that consciousness itself will “emerge” from mind clones at all. It is a pretty big thing to presuppose, and it is a bedrock of the entire brain uploading project. “That’s the big question that nobody has the answer to,” Duncan says. “We are saying that we think consciousness might be an emergent property of physics that comes out of the ever-growing complexity of patterns recognized by neural networks, whether it’s a biological one or an algorithm.”

In this model, consciousness is a cumulative process. It builds gradually, like increasing layers of mud gradually hardening into rock or the hum of engines becoming audible. In the words of the philosopher Daniel Dennett: You can be “sort of” conscious.

Blackford doesn’t seem as convinced that consciousness could be “emergent.”

“I guess in a sense, it must be,” he says. “But in what sense? ‘Slipperiness’ is an emergent property. Slipperiness is inherent in the nature of physics; if you put certain kinds of stuff together at a certain level, what comes of it is you get matter that acts in certain ways we call ‘slippery.’ There’s nothing about slipperiness that’s not deducible from fundamental physical laws.”

“We can’t explain consciousness in the same way,” he says. “That’s our big problem. If it is emergent, it’s got to be emergent in a much stronger sense, and there’s got to be some further law that we don’t know as to how it emerges.” In other words: You can write a recipe for slipperiness; before you even put the ingredients together, you know what you’re going to get. For consciousness, we don’t yet know the ingredients.

“We’re a long way from knowing even where consciousness emerges along the evolutionary path. Is an owl conscious? A lizard? It’s very hard to do an experiment on that, because an oyster can’t tell you, ‘Yeah, I’m feeling kind of a pain there,’ you know?”

We can’t really talk about consciousness directly: We can only track it through the triangulation of metaphors like the Chinese Room and Plato’s Cave. Duncan proposes another: “If you record a live symphony with digital tools and you play it back at home, over your killer digital stereo system, there’s no debate that you are experiencing music. There’s also no illusion that you are experiencing music directly from a live source.

“But it’s getting pretty close, and some people would say, ‘Who cares? You’re having an authentic experience that is organic to you, and it’s coming from a synthetic source. Does it really matter?’ My guess is it won’t.”
