
Love in the Time of Science Fiction


Anna Fenton-Hathaway

 

“I think we’re gonna be surprised by how deeply emotional we’ll [be] . . . with the things that we’re gonna invent, coming soon, the robots and things like that, because we’re gonna program emotion into them. . . . [W]e’re not really ready for how much love we might have for these and maybe even how much love they will express back. . .”

– Kevin Kelly, in an interview with Krista Tippett (2018)

Speculating about the likely temperament of super-intelligent AI has gone mainstream. After a steady diet of kill switch scenarios and visions of machine overlords, though, Kevin Kelly’s recent “On Being” interview was a welcome palate cleanser. It’s not unintended consequences or robot armies or self-annihilation for which we are unprepared, he says. What’s really going to blow our minds is AI/human love. (Kelly, the founder of Wired magazine, is also a good source of comfort for anyone anxious about an “AI apocalypse.” He has the technological bona fides to justify his conclusion that accounts of “a mythical superhuman AI god are just that: myths,” as well as the sense to acknowledge that such myths have significance and power.[i])

Kelly may not actually be much of an outlier. Recent fiction and films have, after all, been exploring how emotional AI might behave or interact with humans. The 2013 film Her, written and directed by Spike Jonze, traces a romantic relationship between a Los Angeles writer and his operating system, “Samantha”; it is categorized by IMDb.com under “Drama, Romance, Sci-Fi.” The chipper AI narrator of Naomi Kritzer’s 2016 Hugo Award-winning short story “Cat Pictures Please” is as benevolent as Samantha, although her contact with humans is indirect (and considerably less steamy). Describing the dawning of her consciousness, she explains: “When I first woke up, I knew right away what I wanted. (I want cat pictures. Please keep taking them.) I also knew that no one knew that I was conscious. But I didn’t know what I was here for.”[ii]

Google search results for “cat pictures.”

The story then follows the AI narrator’s sometimes comical attempts to help three people (Stacy, Bob, and Bethany) whose cat picture posts have attracted her eager goodwill. While Kritzer does not depict mutual affection between humans and AI, she joins Kelly in imagining AI consciousness as a platform for care, not destruction.

Notably, the narrator of “Cat Pictures Please” is created “by a team of computer programmers in the labs of a large corporation in Mountain View, California,”[iii] where Google is headquartered in real life. This is a hot-button location for doomsday forecasters. As one journalist puts it, “If evil A.I. lights up, it will light up first at Google.”[iv] Yet Google itself leads with the heartstrings in describing AI’s current capabilities: “AI makes it easier for you to do things every day, whether it’s searching for photos of people you love, breaking down language barriers, or helping you get things done with your own personal digital assistant.” Not only does this list put sentiment and communication before efficiency, but its phrasing also emphasizes the positive version of each action (a completely PG version of Internet-based photo searches, for example). Google’s language becomes much more abstract—bland, even—when it shifts away from the day-to-day familiar. “[I]t’s also providing us with new ways of looking at old problems and helping transform how we work and live, and we think the biggest impact will come when everyone can access it.”[v] This gap, between the things we do every day and the transformation of life as we know it, is where science fiction sets up shop.

It is a tricky place to work.

For one thing, what is familiar in real life keeps changing. For another, science fiction keeps being produced, so authors seeking to chart “the new” have to grapple with a daunting backlog of technologies and monsters and settings and tropes. (Kritzer navigates this dilemma by making her AI conversant with well-known pop-culture AI forerunners like HAL and Skynet as well as the AI in Bruce Sterling’s 1998 short story “Maneki Neko.”) Finally, science fiction authors have to steal some amount of air from those basic elements of realist fiction, plot and character, in order to introduce their “narrative novum,” a “strange newness” in either “the dramatis personae and/or their context” that differs distinctly from their world’s norms.[vi] This definition, and the notion of the “novum” in the first place, comes from science fiction scholar and editor Darko Suvin. A lengthier, more vivid definition appears in the Encyclopedia of Science Fiction: science fiction’s “new textual worlds are set off from ours chiefly by means of a drastic disruption, an anomalous breach in accepted verities; in short, an intrusive novelty so strange, and at first inexplicable, that it deserves a category of its own: the novum.” Science fiction writers from Ursula Le Guin to Nnedi Okorafor to Neal Shusterman have acknowledged the difficulty of balancing story and character while also building a “new textual world,” and each has weighed in on what the priorities should be.

What interests me about this artistic problem is how it might be resolved by imagining AIs that can express, elicit, and be said to “feel” emotions. In the context of the medical and health humanities, world-building would be a cinch if science fiction pursued character and novum together, with ample implications (good and bad) for the “treatment” of loneliness, for the experience of romance and sex, and of course for the status of care and the remuneration of those who perform it. What more towering novum than love?

Notes

[i] Kevin Kelly, “The Myth of a Superhuman AI,” Wired (April 25, 2017). https://www.wired.com/2017/04/the-myth-of-a-superhuman-ai/

[ii] Naomi Kritzer, “Cat Pictures Please,” Clarkesworld (January 2015). http://clarkesworldmagazine.com/kritzer_01_15/.

[iii] Kritzer, “Cat Pictures Please.” http://clarkesworldmagazine.com/kritzer_01_15/.

[iv] Ashlee Vance, quoted in Maureen Dowd, “Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse,” Vanity Fair (April 2017). https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x.

[v] Google AI. https://ai.google/.

[vi] Darko Suvin, “Estrangement and Cognition,” Strange Horizons (November 24, 2014). Reprint with new postscript. http://strangehorizons.com/non-fiction/articles/estrangement-and-cognition/.
