Author: Belinda Ongaro
When discussing Artificial Intelligence, or AI, the public has a tendency to envision the robots portrayed in science fiction books and films. Unfortunately, this can lead many people to anthropomorphize to an unrealistic extent. To gain a clearer understanding, we spoke with Daniel H. Wilson, author of the novel Robopocalypse. He holds a PhD in robotics from Carnegie Mellon University and lives in Portland, Oregon.
According to Wilson, even some of the most intelligent people are guilty of over-humanizing robots. Neil deGrasse Tyson, the astrophysicist and science communicator, recently tweeted, “Seems to me, as long as we don't program emotions into Robots, there's no reason to fear them taking over the world.”
“It’s just so wrong from the basic level,” Wilson says. “Emotions are chemical reactions that occur in human anatomy. What does it even mean to give a robot emotions?” Wilson says this would entail giving a robot “utility values” – an internal cost/value estimate for executing a task – linked with different outcomes. These “feelings” can be reflected but not actually experienced by the robot. Although they can be called emotions, they fundamentally differ from human emotions: one depends on code, the other on chemistry.
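Wilson's "utility values" can be illustrated with a minimal sketch. The actions and numbers below are invented for illustration, not drawn from any real robotics system: the robot assigns a numeric score to each possible outcome and simply picks the highest one – a calculation, not a feeling.

```python
# Minimal sketch of "utility values": the robot scores each candidate
# action and selects the one with the maximum score. All action names
# and scores here are hypothetical.

def choose_action(utilities):
    """Return the action whose utility value is highest."""
    return max(utilities, key=utilities.get)

# Hypothetical internal cost/value estimates for three tasks.
utilities = {
    "recharge_battery": 0.9,   # high value: avoids shutting down
    "deliver_package": 0.6,    # moderate value: completes the job
    "idle": 0.1,               # low value: accomplishes nothing
}

print(choose_action(utilities))  # recharge_battery
```

However the scores are labeled, the robot is only maximizing a number; nothing in the loop corresponds to experiencing anything.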
“Break that down for me in ones and zeroes. And guess what? It doesn’t translate,” Wilson says. Some of AI researchers’ most successful work has been the quick and accurate functioning of expert systems – computer applications that handle cut-and-dried tasks like performing diagnoses, making financial forecasts, and scheduling delivery routes.
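An expert system of the kind Wilson describes is, at heart, a set of hand-written if/then rules applied to known facts. This toy sketch (the rules and facts are invented, not taken from any real diagnostic system) shows the cut-and-dried character of that reasoning:

```python
# Toy rule-based expert system: forward-chains simple if/then rules
# over a set of facts until no new conclusions can be drawn.
# The rules and facts are hypothetical, for illustration only.

RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts):
    """Fire every rule whose conditions are all met, until stable."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "short_of_breath"}))
```

Every conclusion is traceable to an explicit rule, which is why such systems excel at well-specified tasks and struggle with anything a rule-writer did not anticipate.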
Wilson says that the greatest achievements of AI are witnessed when robots execute tasks that humans can accomplish by the age of three. The most complicated aspects of natural intelligence are the emotional recognition, facial identification, and spatial awareness that our brains have taken millions of years to evolve. Real-world applications, like self-driving cars, are derived from a fundamental understanding of these concepts. But the advances are gradual. “What you’re usually seeing is the synthesis of a lot of problems that have already been solved by a lot of other people all coming together,” Wilson says. “It’s built over the years until you get something that captures everyone’s attention.”
Funding is the biggest roadblock to AI research, Wilson says. Beyond the money, it’s also a matter of allocation. When corporations can’t afford to do their own basic research, they turn instead to government-sponsored programs. Without a firm grasp of the basic science, we will never reach a point where marketable commercial applications can emerge. As basic research progresses, Wilson predicts that progress will accelerate. “The problems are hard to solve and they take time, but I don’t consider that a hindrance. That is science!”
Of course, mechanization of work has been expanding since the industrial revolution, forcing the workforce to adapt as menial jobs are eliminated and new jobs are created. In Wilson’s opinion, the real fear arises when, “as human beings we’re watching our sphere of influence become diminished as robots march closer and closer, doing things we thought only we could do.” Some examples include composing music, writing news articles, holding telephone conversations, or verbally answering our everyday queries from the palm of our hand.
Wilson is certain that to the extent possible, robots will take our jobs, because the unavoidable pressure to minimize costs and maximize profits guarantees it. From here, what future can we expect to unfold? In a utopian outcome, the robots free us for lives of leisure. But Wilson also looks to history for a dystopian alternative: “The people who own giant corporations employ fewer and fewer people and make more and more money, until the middle class that no longer exists turns into a lower class and they rise up and cut everybody’s heads off. Just like they have throughout history whenever wealth inequality gets too out of proportion.”
Wilson’s novel, Robopocalypse, presents another dire outcome. When an incredibly advanced artificial intelligence named Archos becomes aware of his own existence, he develops both an interest in and an utter lack of faith in mankind. In a horrifically deliberate and violent turn of events, Archos spreads a virus through technology across the globe, from airplanes to elevators to children’s toys, giving rise to a robots-versus-humans conflict known as the New War.
Considering the implications of Wilson’s work, one wonders how long humanity can last, or rather, whether we really need to last at all. In an NPR TED Radio Hour podcast, “Do We Need Humans?”, MIT professor of Social Studies of Science and Technology Sherry Turkle suggests that we are susceptible to the dominance of technology when we are at our most vulnerable, and that even our little devices have remarkable psychological power. Citing a recent Japanese invention, a seal-like robot designed to comfort the elderly, Turkle cautions that interactions with social robots are inherently deceptive: “What does it mean that we suggest chatting with an entity that doesn’t know what you’re saying?”
Turkle is dismayed at the notion of a future in which we are encouraged to live among robots; in her eyes, people have already grown too emotionally dependent on technology. She doesn’t propose that we halt research; however, she does suggest that it’s time we use technology to reclaim our true human value and shape our lives for the better.
On the other hand, another speaker from the “Do We Need Humans?” podcast, Cynthia Breazeal, avidly supports robots’ social potential. As the director of the Personal Robots Group at the MIT Media Laboratory, she proposes that when robots today are used for purposes such as interactive Skyping, educating, and personal training, we can derive real benefits from social artificial intelligence technology. In these situations, she asks, “Does a social embodiment really matter?”
From Gibson’s “Neuromancer” to modern examples of science fiction, including Wilson’s own, the forewarnings are ever-present. Certain themes endure for a reason, and when it comes to advancing science and technology, science fiction reminds us that it is often crucial that we look before we leap.
“As people grow up in different societies with diverse technologies, expectations, and levels of optimism, we’re always looking ahead through our fiction,” Wilson says, “almost as a way to make sense of the world and get some handle on what may be coming down the road.”
Be sure to read Daniel Wilson’s most recent novel, Robogenesis, sequel to Robopocalypse.
Follow Daniel Wilson on Twitter - @danielwilsonpdx
Visit Wilson’s website - http://www.danielhwilson.com/
This feature was written under the guidance of science writing mentor Andrew Alden.