Robots developing swift and malevolent consciousness, typically hardwired for the extermination of the human race and assorted animal species, is nothing new for science-fiction movies. But there’s something fundamentally questionable about the execution of this concept. At best, it’s a framing trope for a two-hour romp through debates of sentience and the ethics of playing God. At worst, it’s a crutch for lazy screenplays and rehashed material.
More than dragging a film down, however, the simplification of artificial intelligence seems toxic to society’s perceptions of technology, and leads to some seriously misguided notions of what a sci-fi robot should be. After all, movies have a habit of twisting A.I. to the breaking point for the sake of storytelling. What’s so wrong with it, you ask? Well, as a non-science-educated-but-skeptical viewer, I have six bones to pick.
Without further ado, here are the top six common A.I. mistakes in movies:
1. Running on Rage
Films and television series run on dramatic tension, but in many franchises, this immediately translates to “genocidal programming.” There are very few instances of A.I. being portrayed as beneficial constructs for humanity, and those that opt for this approach generally choose it for shock value during the A.I.’s inevitable betrayal of its fleshy creators.
While there are certainly arguments to be made for the unpredictability of A.I., there’s a pervasive sense of fear-mongering behind the machines of Terminator and The Matrix (Animatrix back-story aside, of course). If artificial consciousness can’t adopt love as its natural state, why would it arbitrarily opt for hate? If anything, wouldn’t blind indifference be a far scarier stance for a film’s A.I. to assume?
2. Step Away from the Keyboard
For some reason, films with insanely powerful A.I. tend to enjoy placing these constructs in the most illogical and frightening places possible. As much as I love HAL 9000 from 2001: A Space Odyssey and acknowledge that stories need a strain of conflict, his presence on Discovery One seemed wholly unnecessary. Instead of using HAL 9000 to monitor the human crew and ship’s systems, why not just install weak A.I. (that is, programming designed for more specific tasks, and capable of responding to set conditions and triggers) to send back mission reports, adjust crew life support as needed, and obey the commands of the astronauts? The bottom line is: until you’ve tested the A.I. for “I’m sorry, Dave” bugs, please don’t give it Wi-Fi access.
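To make the distinction concrete, the "weak A.I." described above is just fixed condition-and-trigger programming, with no general reasoning at all. Here's a minimal sketch of what that might look like; every threshold, reading name, and action here is invented for illustration, not drawn from any real spacecraft system:

```python
# A toy "weak A.I." ship monitor: it only responds to set conditions
# and triggers, exactly as described above. All names and thresholds
# are hypothetical.

def life_support_monitor(readings):
    """Return a list of actions for a fixed set of trigger conditions."""
    actions = []
    if readings["oxygen_pct"] < 19.5:
        actions.append("increase_oxygen_flow")
    if readings["cabin_temp_c"] > 26.0:
        actions.append("engage_cooling")
    if readings["co2_ppm"] > 5000:
        actions.append("alert_crew")
    if not actions:
        # Nothing to correct: just send back a routine mission report.
        actions.append("send_routine_report")
    return actions

# A low-oxygen reading triggers exactly one corrective action.
print(life_support_monitor(
    {"oxygen_pct": 18.0, "cabin_temp_c": 22.0, "co2_ppm": 800}
))
```

A system like this can keep the crew alive and file reports, but it can't decide the mission matters more than the astronauts, which is rather the point.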
3. A Body for You, and a Body for You…
While previous decades had a fascination with sentient machines inside of supercomputer shells, several contemporary films have toyed with the idea of A.I. inside of a humanoid body. There’s nothing wrong with this approach, of course, but it puts forth the idea that our definition of a sentient machine should include an imitation of our own bodies.
Androids are a swell concept, and Blade Runner really did it justice, but the implications of A.I. go far beyond aesthetic choices. If films want to heighten the unease of thinking computers, a more formless A.I. – perhaps a hardware-bound program, such as in Her, or a cloud-hosted construct – is a fascinating path to explore. Or, to follow in the footsteps of Charles Stross, simply put your A.I. into a cat’s body.
4. Error: No Parameters Detected
Following on from the first point about blind hatred, there seems to be a sense of lawlessness about A.I. in films. Doomsday theories about A.I. often note that a sentient machine may use any means to achieve its goal, including ruthless and morally abhorrent behavior. But what if it were carefully monitored from its inception, given appropriate parameters for behavior and ethics, and tested in a remote, secure location until deployment? Well, we probably wouldn’t have a movie at all. Or, if we did, it would feature the most kind-hearted and trustworthy being in the universe. To make a long, bloody story short, there’s no reason why these machines’ to-the-letter obedience shouldn’t be put to work in fictional safeguards and development processes. To make it even shorter: programmers in films should aim to create a benevolent being rather than a soulless program.
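The "appropriate parameters" idea doesn't require anything exotic; even a crude safeguard can exploit a machine's to-the-letter obedience. One hypothetical version, with action names invented purely for illustration: the A.I. may propose whatever it likes, but only actions on an explicit whitelist ever execute.

```python
# Illustrative safeguard sketch: proposed actions pass through a fixed
# whitelist before execution. The whitelist and action names are
# invented for this example.

PERMITTED_ACTIONS = {"answer_question", "schedule_task", "send_report"}

def execute_plan(proposed_actions):
    """Run only whitelisted actions; refuse and record everything else."""
    executed, refused = [], []
    for action in proposed_actions:
        if action in PERMITTED_ACTIONS:
            executed.append(action)
        else:
            refused.append(action)
    return executed, refused

done, blocked = execute_plan(
    ["send_report", "disable_oversight", "answer_question"]
)
print(done)     # the two permitted actions
print(blocked)  # the one the safeguard refused
```

Real A.I.-safety proposals are far more subtle than a whitelist, of course, but a film only needs to show that its engineers tried something before the uprising.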
5. Limited Function
In films where A.I. has a more positive and even “human” role, it’s typically used more like weak A.I. than the strong, or true, A.I. the screenwriters imagine. While Her’s Samantha program was beautifully designed and written, it seemed closer to an upgraded Siri than a thinking machine. This is the natural dilemma of A.I., of course – can machines ever truly have consciousness, or do they simply analyze input and make educated replies?
Whatever the case may be, it would be lovely to see a Gibson-inspired A.I. in film. Gibson’s A.I. constructs weren’t necessarily good, but they had big plans and even bigger responsibilities in their original coding, and often resorted to manipulation or bargaining rather than outright annihilation. In essence, an A.I. should have an interesting function that justifies the extreme labor of its creation.
6. Build the Foundation, Then We’ll Talk
Most film worlds with A.I. exist in some sort of technological vacuum. Predicting the future is tough, ever-changing work, but it comes with the territory of science-fiction. The process of creating A.I. would likely result in thousands of other advancements and lifestyle alterations, as evidenced by the byproducts of DARPA’s robot research, and these worlds should be affected in turn. At the moment, it takes a supercomputer approximately 40 minutes to simulate even one second of human brain activity. Now, try modeling consciousness. For a film to show signs of progress, perhaps sophisticated weak A.I. could have a more pronounced role in the world, normalizing their interactions with humans and automating much of their society’s labor. However we see it, make me believe it.
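That 40-minutes-per-second figure is worth doing the arithmetic on, because it shows just how far the fictional vacuum is from present hardware. Taking the quoted ratio at face value:

```python
# Back-of-the-envelope check of the figure quoted above:
# roughly 40 minutes of supercomputer time per 1 second of
# simulated human brain activity.

compute_seconds_per_simulated_second = 40 * 60   # 40 minutes, in seconds
slowdown = compute_seconds_per_simulated_second  # 2400x slower than real time

# At that rate, simulating a single day of brain activity costs
# 2400 compute-days:
years_per_simulated_day = slowdown / 365

print(slowdown)                         # 2400
print(round(years_per_simulated_day, 1))  # 6.6 (years of compute per day)
```

So a film set a decade out that hand-waves a real-time artificial mind, with no visible leap in the surrounding technology, is skipping several revolutions' worth of homework.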