Why Technoscience Needs Fiction
By Teresa Heffernan, Series Editor of Social and Cultural Studies of Robots and AI.
As science was emerging as a discrete and soon-to-be-dominant way of knowing and as the industrial revolution was transforming the English countryside, Thomas Love Peacock in his “Four Ages of Poetry” (1820) argued that poetry was increasingly useless and retrograde in the age of scientific invention: “A poet in our times is a semi-barbarian in a civilized community. He lives in the days that are past. His ideas, thoughts, feelings, associations, are all with barbarous manners, obsolete customs, and exploded superstitions. The march of his intellect is like that of a crab, backward.”
In the age of robotics and artificial intelligence, this dismissal of fiction, and of the humanities more generally, has only escalated: literature departments, often treated as relics of the past, exist on life support, while think tanks like the well-funded Singularity University, founded by Peter Diamandis and Ray Kurzweil and located in Silicon Valley, thrive. This for-profit, unaccredited institution, sponsored by companies such as Google, Deloitte, and Genentech, says its mission is to teach people “to utilize accelerating technologies to address humanity's grand challenges.” Despite its declared interest in “humanity,” Singularity University offers no courses in the humanities and culture—nothing, for instance, on literature, linguistics, history, art, classics, gender studies, music, cultural studies, postcolonialism, or philosophy. In its promise to catapult us into a shiny future full of instant fixes, it casts aside the complicated terrain of thousands of years of culture in favour of the truth and practicality of technoscience harnessed to corporate interests. This “university” promises that humanity’s “hardest problems”—social inequity, colonialism, war, genocide, climate change, pollution, water scarcity, dying oceans, mental health, superbugs, and disappearing species—can all be solved by “exponential” technology. These problems are never, it seems, about the paucity of the ethical imagination.
While fiction is often credited with inspiring or predicting technological inventions, when it comes to “serious” discussions about the future of robots and AI, fiction is reduced to cheerleading. The “truth” of technoscience, steered by corporate and military interests, takes over as AI and robotics engineers, computer scientists, and CEOs mine the rich array of “humanized” machines and artificial people that have populated literature. For instance, Amit Singhal, a software engineer and former vice-president at Google, wrote: “My dream Star Trek computer is becoming a reality, and it is far better than what I ever imagined.” So too, Cynthia Breazeal, director of the Personal Robots Group at the MIT Media Laboratory, was inspired by R2D2 and C3PO from Star Wars, concluding: “While emotional robots have been a thing of science fiction for decades, we are now finally getting to a point where these kinds of social robots will enter our households.” Two of the wealthiest and most powerful men in the world—Elon Musk and Jeff Bezos—also credit Star Trek for their companies, SpaceX and Blue Origin. Bezos announced at the 2016 Code Conference: “It has been a dream from the early days of sci-fi to have a computer to talk to, and that’s coming true.” The firm SciFutures hires fiction writers to use storytelling, defined as “data with soul,” as a way of accelerating and advertising “preferred” futures; its corporate clients include, among others, Ford, Visa, and Colgate. Yet this utilitarian and overly literal approach—the claim that fiction is coming true—shuts down the ethical potential of fiction.
Ursula K. Le Guin, in her powerful speech at the 2014 National Book Awards that went viral, argued that what we need are people who can imagine “alternatives to how we live now, and can see through our fear-stricken society and its obsessive technologies to other ways of being, and even imagine some real grounds for hope.” She died in January 2018, but her words about needing to get over our obsession with the latest technology grow more relevant by the day as we confront a host of new problems that have emerged from the blind investment in technoscience: from autonomous weapons and a new arms race, to the erosion of democracy through the mining and selling of data, to the built-in prejudice of proprietary black-box solutions marketed as objective, to name a few. As a literary critic, I want to retain the critical edge that fiction has to offer. Robots were born in fiction: the 1920 play R.U.R. by Karel Čapek first used the term, derived from the Slavic robota (forced labor), to explore the mechanization of humans under factory capitalism with its drive for efficiency. Fictional robots or talking computers are no more “real” than talking lions, clever rabbits, witches, demons, or Captain Picard. From Greek mythology to Aesop’s Fables to Star Trek, literature has always been about exploring and negotiating what it means to be human, about who falls inside and outside that category, and about what sort of world we want to inhabit. The very nature of fiction calls for interpretation: it traffics in metaphor and metonymy, and it refuses to be rendered literal or forced into a singular future.
Percy Bysshe Shelley, responding to Peacock with his spirited “A Defence of Poetry” in 1821, wrote: “The cultivation of those sciences which have enlarged the limits of the empire of man over the external world, has, for want of the poetical faculty, proportionally circumscribed those of the internal world; and man, having enslaved the elements, remains himself a slave.” Shelley’s “Defence” might serve as a useful reminder of the limits of the reductive approach to fiction that now seems to dominate. In the periods of history when calculation trumped imagination, Shelley argued, there was the greatest social inequality: the rich got richer and the poor got poorer as society was torn between “anarchy and despotism.”
As we witness the rise of global despots, the displacement of humans by wars and climate change, the increasing concentration of wealth in the hands of a few, and the disregard for the planet and fellow species in a world motivated by profit, we cannot look to new technologies alone to solve these problems. The cultivation of an ethical imagination that Shelley promoted at the outset of the industrial revolution seems newly urgent. Machine learning and robotics have much to offer, but as these technologies affect all humans, other animals, and the planet, they cannot continue to operate in a silo. For the record, crabs don’t march backward; they move sideways.
About the editor
Teresa Heffernan is Professor of English at Saint Mary’s University in Halifax, NS, Canada, and series editor for Social and Cultural Studies of Robots and AI.