
This Elon Musk-approved documentary focuses on the wrong bits of AI

The challenges of artificial intelligence are huge, but they deserve better analysis than this


Image: Papercut Films

This review previously ran in April to coincide with a special online release. It is being updated to coincide with the film’s return to theaters in limited release.

Do You Trust This Computer? is not a particularly subtle watch. The documentary, from filmmaker Chris Paine, is dedicated to the dangers of artificial intelligence, and while it didn’t make a splash in theaters, it was promoted enthusiastically by Elon Musk, who tweeted about the film and paid for it to be streamed for free in early April. (Musk also appears in the documentary as a talking head.) It starts by bombarding viewers with quotations and whizzy graphics of phones and brains. “We have a networked intelligence that watches us, knows everything about us,” says one. “The change is coming and nobody can stop it,” says another. It feels more like a trailer for a bad science fiction movie than a documentary on AI.

This is a shame, as the field of artificial intelligence desperately needs nuanced public discussion. Instead, Do You Trust This Computer? takes viewers on a whistle-stop tour of various AI-related topics, including job automation, autonomous weaponry, and self-driving cars, all illustrated with CGI robots and quotations from respected researchers. Paine, who previously directed the well-received 2006 documentary Who Killed the Electric Car?, is trying to give an ambitious overview of the threat and potential of AI, but he does so in the same way that satellite imagery provides a “good overview” of where you left your car keys. There’s just not enough detail to be useful.

Take, for example, the film’s discussion of superintelligence, the theory that animates many apocalyptic AI scenarios. The idea is that once we build a computer smarter than humans, its intelligence will grow exponentially, and it will become a grave threat to humanity. If we don’t program AI with proper morals, says the theory, it will eventually wipe us out through malice, carelessness, or plain indifference. Musk, who is also the film’s leading voice on superintelligence, warns that such a system would become “an immortal dictator from which we could never escape.”

This may be dreadfully exciting, like any fear-based action movie scenario, but it’s also an incomplete, misleading summary of what the AI community believes about this topic. Yes, many experts acknowledge the possible threat of superintelligence, but they’re quick to add that the technology we have right now is not able to create conscious machines and that AI could create many more pressing dangers to society, like algorithmic policing and automated surveillance.

Superintelligence shouldn’t be dismissed, but neither should it overshadow other concerns

And yes, people like Musk argue that the threat of superintelligence still deserves more attention because it’s existential (i.e., it has the potential to wipe out humanity). But this sort of calculation is useful primarily in an academic environment, where research into superintelligence has spurred plenty of useful work on AI safety. In the media, where attention is scarce and fleeting, scare tactics distort the debate and flatten the many nuances in the discussion of superintelligence.

It’s like ending a documentary on violence in cities by saying, “Forget about muggings, your neighbor could be making a nuclear bomb in their garage right now!” That may be technically true, but it’s not particularly helpful. No wonder scientists on Twitter have been less than flattering about Do You Trust This Computer?, describing it as “gratuitous fear-mongering” and a “really good comedy.”

It’s also worth noting that Do You Trust This Computer? suggests the solution is augmenting humans with AI so we don’t get “left behind.” Musk himself has founded a company based on this premise. And while there’s nothing wrong with Musk promoting a film that supports his theories and financial interests, it does suggest his enthusiasm isn’t entirely disinterested.

Do You Trust This Computer? uses a lot of impressive graphics, but is missing important details.
Image: Papercut Films

Do You Trust This Computer? does spend time on important issues. It discusses the possibility that job automation will lead to greater inequality, and alludes to the great abundance of data being collected about us by companies like Facebook and Google. (Although this has very little to do with artificial intelligence and everything to do with monopolies in the tech industry.) There’s also a particularly interesting section on autonomous weaponry, which makes the depressing but often-overlooked point that, despite our unease about machines making decisions on the battlefield, the expediencies of war will likely override ethical objections. In the documentary, political scientist P.W. Singer notes that unrestricted submarine warfare targeting freighters and tankers was thought unconscionable at the beginning of the 20th century, but had become normalized by the end of World War II. We’re in the middle of a similar transition over the ethics of drone combat, and autonomous weapons may follow the same path.

These sections suffer from the same shortcomings as the rest of the film: they’re too brief and too sensationalist. But the frustrating thing is what’s been omitted altogether. An incredible amount of important work is being done in AI right now exploring the ethical implications of integrating machine learning systems into society. These are weighty topics, like how biased data sets affect the decision-making algorithms used for criminal sentencing and job hiring. And though they are complex, they’re not difficult to communicate. When an algorithm developed by Google to filter online comments gives the statement “I am a gay black woman” a toxicity rating of 87 percent, even the most bombastic documentary makers should be able to express why poorly applied AI could be worrying.

The film ignores too many incredibly important worries

It’s also notable that while much of this important work is being done by women — people like Kate Crawford of the AI Now Institute and Joy Buolamwini of the Algorithmic Justice League — the cast of talking heads in Do You Trust This Computer? is overwhelmingly male. Of the 26 experts featured in the film, 23 are men. As Microsoft AI researcher Timnit Gebru pointed out on Twitter, this is genuinely a “difficult feat to achieve,” considering the gender diversity in the field.

The film’s persistent use of female members of the public to illustrate general naiveté about computers does nothing to dispel the impression that it’s only interested in Important Men talking about Important Ideas. Throughout the documentary, women and children are interviewed on the street, and their relaxed and informal comments — like, “Oh my god, I trust my computer so much” — are consistently contrasted with the assured expertise of men. This (presumably unintentional) motif says more about society’s biases and AI than the documentary itself ever attempts to.

Do You Trust This Computer? is defensible in some ways. It’s engaging, imaginative, and easy to watch, and it brings attention to a subject that’s going to have real and important effects on all our lives. But it sacrifices too much complexity and detail to achieve this, and it’s more misleading than informative.

Paine anticipates this criticism. His dramatic opening sequence features a clip from Terminator 2, with a robot stepping on a human skull. And then Westworld co-creator Jonathan Nolan says the media and Hollywood have “fucked up” by “crying wolf enough times” to inoculate the public against a fear of AI. It’s different this time, Nolan assures us: the fear is real and present. Then the film starts, and the shouting begins. “Wolf, wolf, wolf!”