
How to Speak Knowledgeably About the AI Threat

Brian Santo
11/24/2016

Should we be worried about artificial intelligence (AI)? This wouldn't be the first time (or the tenth, or the thousandth) that technologists have proceeded with whatever they're doing without giving adequate thought to the consequences. When it comes to examining the ramifications of technology, it's always instructive to turn to the creators of popular fiction, and nowhere is this more the case than in AI, where Isaac Asimov's Three Laws of Robotics are at least as well known among AI researchers as anything their peers have devised, if not more so.

I might write a profound essay on the threat (or lack thereof) of AI someday, but this is not it. This is a primer on AI in pop culture. It's a survey of some of the best things you can read, watch or play for entertainment, all recommended for the framework they provide for thinking about the potential threat in the real world.

The world has yet to agree on what precisely qualifies as thinking. When devising the AI test that bears his name, Alan Turing decided the question was unanswerable and simply jumped ahead. If a machine can convince most humans most of the time that it is human, he proposed, then it might as well be human.
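Turing's dodge can even be phrased operationally: tally how often human judges mistake the machine for a human, and say the machine "passes" if it fools most of them. A minimal sketch of that criterion (the threshold and the sample verdicts here are hypothetical, purely for illustration):

```python
# Sketch of Turing's "imitation game" criterion:
# a machine passes if it convinces most judges, most of the time.

def passes_imitation_game(verdicts, threshold=0.5):
    """verdicts: list of booleans, True when a judge labeled the machine 'human'.
    Returns True if the fraction of fooled judges exceeds `threshold`."""
    if not verdicts:
        return False
    return sum(verdicts) / len(verdicts) > threshold

# Hypothetical session: 7 of 10 judges labeled the machine human.
verdicts = [True] * 7 + [False] * 3
print(passes_imitation_game(verdicts))  # True
```

Of course, the hard part Turing sidestepped is everything the booleans hide: what the judges asked, and why they were convinced.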

Once the question is raised of whether machines can think, a series of adjunct questions immediately opens up. If they think, can they feel? If they can feel, does that mean they have desire? If so, does that imply an impetus to act on those desires, and if so, doesn't that imply will? If a machine can think and feel and have will, then what does it mean to be human?

All of these questions were raised in the post-industrial urtext on the subject, the play R.U.R. (Rossum's Universal Robots), first performed in 1921. The R.U.R. company starts cranking out flesh-and-blood artificial beings -- which playwright Karel Capek called robots, introducing the word to the world -- that grow ever more capable and eventually kill all but one last human. That last human observes a pair of robots falling in love, and dubs them Adam and Eve. Life goes on, depending on your point of view.

In fiction, bringing the inert to life is almost always perilous. In the Book of Genesis, God breathes life into clay but comes to regret his actions, and raises a flood. He changes his mind again, but still. Tales of human hubris start in ancient mythologies and run through Mary Wollstonecraft Shelley's Frankenstein, R.U.R., and on through thousands of subsequent works in print and on film, and most end in doom, at least for humans.

Pygmalion petitions Aphrodite to bring a statue of a woman to life. The goddess does, and it all works out okay for Pygmalion. Possibly also for the statue.

Asimov was among the first great thinkers in science fiction, and he started out in Pygmalion mode in the earliest stories in I, Robot. He teases the threat posed by AI, discovering odd special cases in the application of the Three Laws of Robotics, but everything generally works out. Yet as he progressed with the Robot series (dozens of stories and five novels), people and robots alike end up sharing an existential dilemma -- what to do with humans who insist on doing harm to robots and each other?

In taking this direction, Asimov might have been influenced by Jack Williamson, a fellow Science Fiction Grandmaster much less well known outside genre circles, but still very much worth reading. Williamson countered Asimov's original I, Robot with the chilling short story With Folded Hands. An inventor creates "humanoids" that he charges with protecting humans, but the humanoids conclude that the worst threat to humans is humans themselves, to their inventor's horror.

Williamson pursues the notion in a follow-up novel, The Humanoids, in which our humanoid slaves become despotic overlords. Yet Williamson's pitch is a curveball, and he leads us to wonder if maybe the humanoids weren't right all along.

Dune, one of the most revered books in the genre, gets a mention here for its notable, deliberate lack of any AI. The backstory Frank Herbert devises for his Dune novels includes a long-ago war, the Butlerian Jihad, in which sentient machines and humans square off and humans (for once) prevail. The result is Mentats, humans who develop natural (more or less) supercomputer-like abilities. Brian Herbert (Frank's son) and his writing partner Kevin J. Anderson tell the story of the Butlerian Jihad in their Legends of Dune prequel trilogy, which opens with Dune: The Butlerian Jihad and concludes with The Battle of Corrin.

Philip K. Dick was a singular writer for many reasons, including his constant probing of what qualifies as human. Dipping into nearly any of his works is rewarding, but his touchstone on the theme is his novel Do Androids Dream of Electric Sheep?, which is well known as the inspiration for the film Blade Runner, which strays from its source, but in interesting ways.

Blade Runner posits a world of replicants, slaves who appear fully human visually yet still are distinguishable behaviorally; they are created with built-in mechanisms that cause them to die after a few short years. The protagonist, Deckard, is skilled at identifying replicants, but we enter this world at a point where replicants are coming to behave so much like humans that Deckard might no longer be able to tell the difference. The movie leaves viewers contending with the dilemma Turing left for us: Does an adequate simulacrum of humanity amount to humanity? In the film, a replicant's soliloquy (partially improvised!) on what it means to be alive, delivered just before dying, is moving by anyone's standards.

After Blade Runner, there were any number of ruminations on AI that are worth the time, though they don't do much more than ask the same questions in slightly different ways.

One of the crewmembers of Star Trek: The Next Generation, irritatingly called Data, is an android who throughout the series probes the nature of humanity. It's one of the most famous depictions of an AI struggling with humanity, and it's a decent enough show, but those explorations are more often than not throwaway subplots.

The reboot of Battlestar Galactica on TV goes over similar ground, but is notable for both better-than-average writing and also for more fully incorporating the point of view of the AIs -- called Cylons or, dismissively by some humans, "toasters."

Where Battlestar Galactica was an adventure with a crucial subplot concerning what is human and what is not, in the current Westworld that question appears intrinsic to the adventure. Westworld is savvy about provoking questions not only about intelligence and emotion and will, but also about the role that desire and memory and dreams play in what it means to be "human."

The 2015 film Ex Machina is yet another feature-length representation of the Turing test -- in fact, it is a feature-length Turing test -- but a twist of sorts makes it still essential viewing. The robot, Ava, looks like a robot, but for her face. Yet the young man brought in to interview her comes to regard her as human, even though she indisputably does not look it. The director sets up the interviewer as the viewers' proxy in the film; the film consequently forces us as viewers to measure ourselves by our response to Ava. Do we, like the interviewer, believe she's human?

Form might or might not make a difference. Iain M. Banks's Culture series of novels features sentient ships, which (who?) can certainly embody emotions and characteristics we recognize as human as they traverse a universe filled with a number of races, including humans, most of whom are content to remain mostly human. The books are remarkable in that they are set at a point where the questions about what AI is, and whether it's a threat, are settled. Ships' concerns are mostly their own, and they only interact with humans when they feel like it or when they feel they have to. Sometimes they interact through human proxies, and it's all okay.

A more recent set of novels by Ann Leckie similarly starts with the notion of sentient ships but re-injects the questions with another twist, by having them considered not from the standpoint of humans, but from the standpoint of the sentient ships. Could sentient machines become troubled about mistreating humans? The series starts with the 2013 novel Ancillary Justice, which earned a number of genre awards.

The 2013 film Her presents a situation and a set of questions I haven't come across before. The subject is Theodore, who falls in love with a Siri-like assistant named Samantha. For starters, it is rare -- perhaps singular -- to see a human character having an emotional relationship with another character that is completely disembodied. We watch Theodore as he falls in love with Samantha, and the movie asks us to take it on faith that Samantha falls in love with Theodore. But that gives rise to what appears to be a novel set of questions: if an AI can be human, but is still AI, doesn't that automatically make it much more than simply human? What does that mean for the AI? What does that mean for the humans emotionally interacting with those AIs?

Films and books are one thing, but no matter how deeply you engage with either, reading or viewing is still a completely passive experience. In many modern video games, players don't just react; they must make choices. Fallout 4 presents a situation in which synthetic humans -- synths -- look and behave just like humans. The central plot forces players to make a constant series of decisions about whether to treat synths as humans or not. There's something about making an active choice that makes the issue more visceral.

One last recommendation.

In fiction (as in life), we are asked again and again to consider that if something appears to be intelligence, if something appears to be emotion, if something appears to be will, maybe it might as well be. The subject of the question is almost always the machine, but it can cut both ways. Don't humans model emotions? Does that make them more or less human?

The film The Imitation Game depicts Alan Turing as he helps crack the German Enigma codes in World War II, but take the ambiguity in the title as the invitation it is to think about what imitation means in the context of Turing himself. At the top level, it might refer to Turing, as a gay man, trying to imitate heterosexual behavior. The film also depicts him experiencing difficulty relating to others on an emotional level (had he been born 40 years later, Turing might have received a diagnosis of autism), and ultimately modeling -- imitating -- emotive behaviors.

What does it mean to be human if not all humans meet all the criteria many of us unconsciously assume apply?

— Brian Santo, Senior Editor, Components, T&M, Light Reading

Comments

Michelle, 11/28/2016 -- Re: As I see it...
@joe AI will rule. AI will burn down society. Opinion. NEWS!

What's the solution?

Joe Stanganelli, 11/28/2016 -- As I see it...
The problem, as I see it, is that the publications and outlets publishing op-eds about the dangers of AI are the very same publications and outlets publishing op-eds and "news" about how great, how much safer and how much more efficient self-driving vehicles are.

The latter makes it very difficult for the general populace to take the Asimovian sci-fi warnings seriously, methinks.