You could read a bunch of scientific papers, but all the best thinking on the subject has already been done in plays, books, movies and games.

Brian Santo, Senior editor, Test & Measurement / Components, Light Reading

November 24, 2016

How to Speak Knowledgeably About the AI Threat

Should we be worried about artificial intelligence (AI)? This wouldn't be the first time (or the tenth, or the thousandth) that technologists have proceeded with whatever they're doing without giving adequate thought to the consequences. When it comes to examining the ramifications of technology, it's always instructive to turn to the creators of popular fiction, and nowhere is this more the case than in AI, where Isaac Asimov's Three Laws of Robotics are at least as well known among AI researchers as anything their peers have devised, if not more so.

I might write a profound essay on the threat (or lack thereof) of AI someday, but this is not it. This is a primer on AI in pop culture: a survey of some of the best things you can read, watch or do for entertainment, all of them recommended for the framework they provide for thinking about the potential threat in the real world.

The world has yet to agree on what precisely qualifies as thinking. When devising the AI test that bears his name, Alan Turing decided the question was unanswerable and simply jumped ahead. If a machine can convince most humans most of the time that it is human, he proposed, then it might as well be human.
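
Turing's bar -- convince most humans, most of the time -- is easy to express as a procedure, which may help make the idea concrete. Below is a minimal, purely illustrative Python sketch; the judge_believes_human stub, its 60% figure and the passes_imitation_game helper are all invented for this example, not anything Turing specified.

```python
import random

def judge_believes_human(candidate_reply: str) -> bool:
    # Placeholder verdict. A real judge would interrogate the candidate;
    # here we simply assume the machine fools a judge 60% of the time.
    return random.random() < 0.6

def passes_imitation_game(candidate, judges: int = 25, rounds: int = 10) -> bool:
    """Turing's informal bar: most judges are fooled most of the time."""
    convinced = 0
    for _ in range(judges):
        wins = sum(judge_believes_human(candidate("Tell me a joke."))
                   for _ in range(rounds))
        if wins > rounds / 2:      # this judge was fooled in most rounds
            convinced += 1
    return convinced > judges / 2  # most judges were fooled

# Example: a trivial "machine" that always gives the same canned answer.
machine = lambda prompt: "Why did the robot cross the road?"
print("Passes:", passes_imitation_game(machine))
```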

Once the question of whether machines can think is raised, a series of adjunct questions immediately opens up. If they think, can they feel? If they can feel, does that mean they have desire? If so, does that imply an impetus to act on those desires, and if so, doesn't that imply will? If a machine can think and feel and have will, then what does it mean to be human?

All of these questions were raised in the post-industrial urtext on the subject, the play R.U.R. (Rossum's Universal Robots), which premiered in 1921. The R.U.R. company starts cranking out flesh-and-blood artificial beings -- which playwright Karel Čapek called robots -- that grow ever more capable and eventually kill all but one last human. That last human observes a pair of robots falling in love, and dubs them Adam and Eve. Life goes on, depending on your point of view.

In fiction, bringing the inert to life is almost always perilous. In the Book of Genesis, God breathes life into clay but comes to regret his actions, and raises a flood. He changes his mind again, but still. Tales of human hubris start in ancient mythologies and run through Mary Wollstonecraft Shelley's Frankenstein, R.U.R., and on through thousands of subsequent works in print and on film, and most end in doom, at least for humans.

Pygmalion petitions Aphrodite to bring a statue of a woman to life. The goddess does, and it all works out okay for Pygmalion. Possibly also for the statue.

Asimov was among the first great thinkers in science fiction, and he started out in Pygmalion mode in the earliest stories in I, Robot. He teases the threat posed by AI, discovering odd special cases in the application of the Three Laws of Robotics, but everything generally works out. Yet as he progressed with the Robot series (dozens of stories and five novels), people and robots alike end up sharing an existential dilemma -- what to do with humans who insist on doing harm to robots and each other?

In taking this direction, Asimov might have been influenced by Jack Williamson, a fellow Science Fiction Grand Master who is much less well known outside genre circles but still very much worth reading. Williamson countered Asimov's original I, Robot with the chilling short story With Folded Hands. An inventor creates "humanoids" that he charges with protecting humans, but the humanoids conclude, to their inventor's horror, that the worst threat to humans is humans themselves.

Williamson pursues the notion in a follow-up novel, The Humanoids, in which our humanoid slaves become despotic overlords. Yet Williamson's pitch is a curveball, and he leads us to wonder if maybe the humanoids weren't right all along.

Dune, one of the most revered books in the genre, gets a mention here for its notable, deliberate lack of any AI. The backstory Frank Herbert devises for his Dune novels includes a long-ago war, the Butlerian Jihad, in which sentient machines and humans square off and humans (for once) prevail. The result is Mentats, humans who develop natural (more or less) supercomputer-like abilities. Brian Herbert (Frank's son) and his writing partner Kevin J. Anderson tell the story of the Butlerian Jihad in a prequel trilogy that begins with Dune: The Butlerian Jihad and concludes with Dune: The Battle of Corrin.

Philip K. Dick was a singular writer for many reasons, including his constant probing of what qualifies as human. Dipping into nearly any of his works is rewarding, but his touchstone on the theme is his novel Do Androids Dream of Electric Sheep?, which is well known as the inspiration for the film Blade Runner, which strays from its source, but in interesting ways.

Blade Runner posits a world of replicants, slaves who appear fully human visually yet are still distinguishable behaviorally; they are created with built-in mechanisms that cause them to die after a few short years. The protagonist, Deckard, is skilled at identifying replicants, but we enter this world at a point where replicants are coming to behave so much like humans that Deckard might no longer be able to tell the difference. The movie leaves viewers contending with the dilemma Turing left for us: Does an adequate simulacrum of humanity amount to humanity? In the film, a replicant's soliloquy (partially improvised!) on what it means to be alive, delivered just before dying, is moving by anyone's standards.

After Blade Runner, there were any number of ruminations on AI that are worth the time, though they don't do much more than ask the same questions in slightly different ways.

One of the crewmembers of Star Trek: The Next Generation, irritatingly called Data, is an android who throughout the series probes the nature of humanity. It's one of the most famous depictions of an AI struggling with humanity, and it's a decent enough show, but those explorations are more often than not throwaway subplots.

The reboot of Battlestar Galactica on TV goes over similar ground, but is notable for both better-than-average writing and also for more fully incorporating the point of view of the AIs -- called Cylons or, dismissively by some humans, "toasters."

Where Battlestar Galactica was an adventure with a crucial subplot concerning what is human and what is not, in the current Westworld that question appears intrinsic to the adventure. Westworld is savvy about raising questions not only about intelligence and emotion and will, but also about the role that desire and memory and dreams play in what it means to be "human."

The 2015 film Ex Machina is yet another feature-length representation of the Turing test -- in fact, it is a feature-length Turing test -- but a twist of sorts makes it still essential viewing. The robot, Ava, looks like a robot, but for her face. Yet the young man brought in to interview her comes to regard her as human, even though she indisputably does not look it. The director sets up the interviewer as the viewers' proxy in the film; the film consequently forces us as viewers to measure ourselves by our response to Ava. Do we, like the interviewer, believe she's human?

Form might or might not make a difference. Iain M. Banks's Culture novels feature sentient ships, which (who?) can certainly embody emotions and characteristics we recognize as human as they traverse a universe filled with a number of races, including humans, most of whom are content to remain mostly human. The books are remarkable in that they are set at a point where the questions about what AI is, and whether it's a threat, are settled. The ships' concerns are mostly their own, and they interact with humans only when they feel like it or when they feel they have to. Sometimes they interact through human proxies, and it's all okay.

A more recent set of novels by Ann Leckie similarly starts with the notion of sentient ships, but re-injects the old questions with a twist of its own: they are considered not from the standpoint of humans but from the standpoint of the sentient ships. Could sentient machines become troubled about mistreating humans? The series starts with the 2013 novel Ancillary Justice, which earned a number of genre awards.

The 2013 film Her presents a situation and a set of questions I haven't come across before. The subject is Theodore, who falls in love with a Siri-like assistant named Samantha. For starters, it is rare -- perhaps singular -- to see a human character having an emotional relationship with another character that is completely disembodied. We watch Theodore as he falls in love with Samantha, and the movie asks us to take it on faith that Samantha falls in love with Theodore. But that gives rise to what appears to be a novel set of questions: if an AI can be human, but is still AI, doesn't that automatically make it much more than simply human? What does that mean for the AI? What does that mean for the humans emotionally interacting with those AIs?

Films and books are one thing, but no matter how deeply you engage with either, reading or viewing is still a completely passive experience. In many modern video games, players don't just react; they must make choices. Fallout 4 presents a situation in which synthetic humans -- synths -- look and behave just like humans. The central plot forces players to make a constant series of decisions about whether to treat synths as humans or not. There's something about making an active choice that makes the issue more visceral.

One last recommendation.

In fiction (as in life), we are asked again and again to consider that if something appears to be intelligence, if something appears to be emotion, if something appears to be will, maybe it might as well be. The subject of the question is almost always the machine, but it can cut both ways. Don't humans model emotions? Does that make them more or less human?

The film The Imitation Game depicts Alan Turing as he helps crack the German Enigma codes in World War II, but take the ambiguity in the title for the invitation it is to think about what imitation means in the context of Turing himself. At the top level, it might refer to Turing, as a gay man, trying to imitate heterosexual behavior. The film also depicts him experiencing difficulty relating to others on an emotional level (had he been born 40 years later, Turing might have earned a diagnosis of autism), and ultimately modeling -- imitating -- emotive behaviors.

What does it mean to be human if not all humans meet all the criteria many of us unconsciously assume apply?

— Brian Santo, Senior Editor, Components, T&M, Light Reading

About the Author(s)

Brian Santo

Senior editor, Test & Measurement / Components, Light Reading

Santo joined Light Reading on September 14, 2015, with a mission to turn the test & measurement and components sectors upside down and then see what falls out, photograph the debris and then write about it in a manner befitting his vast experience. That experience includes more than nine years at video and broadband industry publication CED, where he was editor-in-chief until May 2015. He previously worked as an analyst at SNL Kagan, as Technology Editor of Cable World and held various editorial roles at Electronic Engineering Times, IEEE Spectrum and Electronic News. Santo has also made and sold bedroom furniture, which is not directly relevant to his role at Light Reading but which has already earned him the nickname 'Cribmaster.'
