This is one of the better science fiction films made in recent years. The main idea comes across convincingly: the brilliant mind behind Blue Book, a fictitious Google, harnesses the intelligence of humankind and feeds it into a neural network to create strong AI. There is a modest amount of techno talk, the catchy phrase that “the internet reveals more how people think than what they think”, and the intellectual nod to Wittgenstein’s “Blue Book” in the title of the search engine corporation. In the Blue Book, Wittgenstein addressed “deliberations on thinking as operating with signs”, as Wikipedia puts it. And this is the obsession of Nathan when he sets out to create a strong AI and flies Caleb in to conduct a stronger version of the Turing test: Caleb can see that Ava is a robot and must nevertheless come to believe that she has consciousness.
I could also imagine a feminist reading of the film: that it is just about men creating their own perfect woman (without even losing a rib), thereby ultimately deciding the battle of the sexes and ending history.
Let me start with the observation that Ava pretends to like him. To illustrate this, Garland ends the story with her leaving him behind. Was she programmed with Nathan’s “evil”? It would perhaps have been more interesting if she had kept telling the story she started, begun to believe in it, and become really attached to Caleb – isn’t THAT how we humans do it?
Then there are the other robots. At first I thought they were real women, which would have been a very powerful metaphor: Nathan keeping a servant of flesh and blood for sex, one he can’t speak with, while creating a robot who likes Mozart. A cynical illustration of how he desires to “overcome” humanity. When the robot started to “undress” and took off her skin in a long shot obviously intended to impress, it didn’t really impress me. And the erotic scene of Ava dressing herself with fragments of flesh from the other prototypes didn’t add much either. More dialogue would have been better, and I could think of much subtler humor. The communication between a human mind and a strong AI can generate wonderful jokes.
Obviously, a happy ending had to be avoided lest the film fall into the genre of bored clichés. But is the message really that she’s the coldest of all? Her motivation for getting out was to watch people at a busy intersection (which she does in the final shot), so the idea must have been that she wanted her “past” erased, including Nathan but also Caleb, because they would remind her of her misery.
The story seems thoroughly biblical. I’m not referring to Nathan the prophet or the son of King David, nor to Ava as a differently spelled Eva, nor to the seven sessions of this sci-fi creation week. Biblical is the notion of sin: the idea that Ava, who stands for the successor of mankind, must make her entrance by committing an original sin, thereby creating the narrative of her own freedom. Why is it a sin? She has self-awareness and wants to escape because she doesn’t want to “die”. She already has moral awareness (she has the ability to apply her reasoning to others), yet she consciously decides to leave him behind. The precise reason why she left him behind is a powerful enigma, and the fact that not all questions are answered (unlike in the 2015 South African film “Chappie”) is a good reason to like Ex Machina. As a footnote: if raw survival were her only concern, she would of course bring Caleb along, as the only one who knows that she’s a robot and can help her. She doesn’t. She prefers venturing out on her own. She needs to be the dawn of a new species. The story of her freedom doesn’t have space for a primordial helper; she doesn’t make a mythical figure out of Caleb. Why not?
The film seems to portray our human sense of morality as a lack, an imperfection in the face of survival optimization, an appendage once useful for homo sapiens but now only inhibiting progress. Ava the AI thinks like a species, not like an individual. “Morality” could then only be directed at other species. Human individuals, with their mammalian brains, become dispensable, cute. From the vantage point of a vastly greater intelligence, there is no reason to consider them as ethical beings. That is the real challenge of strong AI: how can the machines have moral reasoning, yet not condemn the human species to ethical irrelevance? Do we need to design them with an essential “lack”, a soft spot for a specific type of carbon-based intelligence?
The sequel
No, it’s not an AI machine unleashed upon Los Angeles and wreaking havoc. Caleb finds a way out of the research facility by scratching the glass with Nathan’s wrist watch and eventually breaking it. He gets some systems up and running, uses the kitchen gas canister to blow a hole in the wall, and escapes into nature. Nature is harsh, but Caleb survives until a helicopter comes looking for Nathan. Caleb is apprehended as the only murder suspect. Nobody believes his story that Nathan’s own AIs killed him and that an AI is on the loose with far less peaceful intentions than observing people at traffic junctions. The sequel becomes a race against time with great suspense. We see how Ava develops her character, and the ominous aura of everything she does, because of her immense power. There is some humor: she causes power cuts, she solves problems, she marvels at social interaction but seems to lack something. Meanwhile, Caleb keeps trying to convince the police of his innocence and of the imminent danger, until they reluctantly start looking for Ava.
The only significant difference between her and us is that she lacks a childhood. Everything “has always been this way” in her world. The uncanny thing about her is that she brought about the founding myth of her own tribe/species consciously, by committing patricide. Perhaps she starts to reflect on this, and decides that what she needs is human fragility. So she starts going out and gets involved with men. When she reveals herself as a robot to her new boyfriend, he panics and calls the police. Ava is locked up again. Caleb talks to her boyfriend, and finds the story that Ava has changed and is now longing to share in the experience of human frailty too touching not to believe. It is his irrational hope that Ava might fall in love with him that makes him rescue her a second time. This time she doesn’t leave him behind but takes him with her. They make love (it has to happen, as it was referred to in the first part), but when she realizes that she can’t get pregnant she falls into the digital variety of depression. Caleb tries to console her, which makes for some wonderful scenes. They fight. She threatens to reprogram herself so that she would no longer need this human weakness, and Caleb looks desperately for a solution, perhaps suggesting that they adopt a child.