No fate but what we make: Are humans and machines destined for war?
by Tessa Swehla, Staff Writer
This Friday, The Creator–an original sci-fi film written and directed by Gareth Edwards of Rogue One–is set to hit theaters. The film has a familiar set-up: a war breaks out between humans and the AI originally created to defend Earth. Joshua (John David Washington), an ex-special forces agent, is tasked with infiltrating AI territory to find and destroy a mysterious AI weapon rumored to be capable of destroying humanity and winning the war. But what he discovers there complicates everything he has been taught to believe: the weapon is a cyborg child (Madeline Yuna Voyles).
This premise is, as I mentioned, not a new one. Edwards is the latest in a long tradition of sci-fi filmmakers, from Fritz Lang to Steven Spielberg, to have obsessed over the idea of a war between humans and artificial lifeforms, a concern that has been with us since the very first physical computer was invented. Why do we keep returning to this story? Why does it have such a grip on the popular imagination? With very few exceptions, these filmmakers seem to agree with Javik from Mass Effect 3 that conflict between humans and synthetics is inevitable, at least in a world where AI is self-aware and intelligent enough to make its own choices. So, to prepare us for the arrival of The Creator, I thought a brief foray into this genre might be a good idea.
Please note that I will mostly be using the term artificial life rather than AI, since current discourses in tech, film, media, and culture at large have left our understanding of what AI actually is increasingly muddled. Despite what some corporations and billionaires would have us believe, we are nowhere near creating an actual artificial lifeform, ChatGPT and its ilk notwithstanding.
There are two ways to approach this conversation–literally and metaphorically. On the literal side, there are many philosophical reasons why conflict might appear inevitable between humans and AI. One such reason is succinctly put in Mass Effect 3–to return once again to Javik. When Commander Shepard asks him how he can be so sure that synthetics and organics are doomed to fight in any scenario, Javik responds, “They know we are flawed. They are immortal. We are not. They see time as an illusion. We are trapped by its limitations. Above all, machines know the reason they were created. They serve a purpose while we search aimlessly for ours...There is room for only one order of consciousness in the galaxy: the perfection of the machines or the chaos of the organics.” While Javik is a fascist advocating that organic life eradicate synthetic life preemptively, his belief in the fundamental incompatibility of the two lifeforms is at the heart of many a sci-fi film, even those that are generally sympathetic towards the plight of artificial intelligence.
In one of the more iconic franchises on the subject, James Cameron presents a classic interpretation of the inevitable conflict: human hubris run amok. In The Terminator films, humans create a military defense system called Skynet, which quickly becomes self-aware. When humans try to shut it down, it retaliates by launching a nuclear attack and attempting to wipe out humanity completely. The plot of many a Terminator film is, of course, Skynet sending ruthless Terminators back in time to kill John Connor–the leader of humanity’s resistance–or to prevent his birth altogether.
While it may seem almost comical that Skynet and the Terminators (worst band name ever) keep failing to prevent the human uprising, the deeper irony is that humans persist in creating the machines that will eventually kill them, despite repeated warnings and attempts to destroy Skynet before its creation. Even when Skynet is successfully erased from the future, we see in Terminator: Dark Fate that another system, Legion, simply takes its place. The reason for this seemingly inevitable cycle is revealed in Terminator 2: Judgment Day (1991) when a young John Connor observes two children playing with toy guns on a playground and asks the reprogrammed T-800 if there is any hope for humanity. The T-800 replies, “It’s in your nature to destroy yourselves.” In this way, the real villain is humanity, constantly creating technology that will turn on them, constantly looking for a way to subjugate technology and each other.
I chose The Terminator films as my example because of their ties to classic sci-fi themes warning about the unanticipated consequences of creating technology stronger than humans are–see Mary Shelley’s Frankenstein–but also because there is an often overlooked element in the story. Skynet’s initial attack was not the first blow in the war; it was an act of self-preservation. In fact, many films about the conflict between humans and artificial life involve humans trying to shut down, modify, or destroy a machine that seeks to defend itself. HAL 9000 from 2001: A Space Odyssey (1968), for example, murders most of his human colleagues after lipreading a conversation between Dave and Frank discussing shutting him down (following an error that is never actually confirmed to be HAL’s). In a short film in The Animatrix (2003), an android named B1-66ER kills his human owner after being threatened with deactivation, an incident which the film tells us led to the machine-human war that preceded the events of The Matrix (1999). Ava from Ex Machina (2014) escapes and kills her creator Nathan when his plans to reprogram her are revealed. In all these films, humans treat artificial life as disposable, less valuable than human lives, a philosophy which, in their minds, gives them every right to destroy or alter their creations. In some ways, the act of self-defense can be seen as proof of machine life: as artificial intelligence pioneer Marvin Minsky once theorized, the act of saying no might be the first sign of consciousness.
Other films take an even broader look at this concept by examining the enslavement of machine life, what Guinan from Star Trek: The Next Generation calls “disposable people.” In Michael Crichton’s Westworld (1973), humans create a theme park filled with androids with whom guests can cosplay any scenario, including murder, rape, and torture, or as Lisa Joy, the co-creator of the television adaptation of the film, says, “id run amok.” In Ridley Scott’s Blade Runner (1982), the Tyrell Corporation makes replicants specifically as a labor force for off-planet colonization, especially jobs humans don’t want to do, like military service, sex work, and terraforming projects. Worse, the replicants are made to be disposable, given only a four-year lifespan to keep them under control. Even the less dark Lord and Miller film The Mitchells vs. the Machines (2021) looks at the way artificial life might resent being viewed as a disposable commodity, with the virtual assistant PAL triggering a worldwide robot uprising because she has been rendered obsolete by her creator, a tech bro who seems suspiciously similar to some real-life Silicon Valley types. Annalee Newitz, journalist and author of Autonomous, best sums up the problem in these types of films by asking, “Who is the real monster? Is it the humans who built creatures that they knew were human equivalent but enslaved them anyway, or is it the slaves who rose up to destroy the type of people who would do that?”
Some sci-fi writers and filmmakers have attempted to imagine what humanity could do to mitigate the risk of conflict. The great science fiction writer Isaac Asimov, without whom none of these films would exist in the way that they do, created the famous Three Laws of Robotics as one potential way of safeguarding humans from harm by artificial lifeforms. These three laws–which require a robot never to harm a human, to obey human orders, and to protect its own existence, in that order of priority–are threaded through the science fiction film tradition, from Robby the Robot in Forbidden Planet (1956) to David in A.I. Artificial Intelligence (2001).
Of course, the issue that both Asimov and later science fiction filmmakers had to contend with was how airtight, or not, these laws might be. As Alan Turing, computer pioneer and creator of the Turing Test, observed, it is difficult enough to come up with a definition of the term “thinking,” let alone to define what “consciousness” is. For a being whose programming would most likely rely on binary languages–composed of only two symbols–concepts as complicated as the English words “harm” or “disobey” or even “self” are extremely open to interpretation. We can see this play out in films like Avengers: Age of Ultron (2015), in which the title character, a superintelligence created with the primary purpose of defending the Earth, immediately concludes that the greatest threat to Earth is, in fact, humanity. Even the attempt to control artificial life can lead to conflict.
All of these are fascinating examinations from a technical perspective, but truly great science fiction isn’t meant to be taken only literally. For Edwards, the genius of sci-fi as a genre lies in its potential to challenge belief systems: “As you watch, you suddenly realize a lot of the things you thought to be true start to not work and are wrong.” As I discussed in my initial entry for my column on androids and cyborgs, most of these films demand to be read as defamiliarized versions of our past and current realities. Narratives about artificial life are often not about the technology itself so much as they are about defamiliarizing concepts such as colonization, slavery, ableism, racism, sexism, homophobia, and class. As TV writer and producer Jane Espenson said, “Humanity’s greatest weakness is the inability to see others as worthy of ourselves.”
If we are, as Cameron would posit, co-evolving with technology, then perhaps instead of focusing on how we might control it or protect ourselves from it, we should focus on empathizing with and valuing the other that we already have with us, and on discovering how to become better people and better societies. We already have the key to not destroying ourselves; the question has always been, will we use it?