Ex Machina: Ethics and Safety of AI

Posted on February 10th, 2021 by Emilio Miles


The plot of Ex Machina follows a young programmer, Caleb Smith, who wins a one-week visit to the secluded luxury home of Nathan Bateman, CEO of the company Caleb works for. Nathan has built a humanoid robot named Ava with artificial intelligence. The movie centers on the interactions between Caleb and Ava, which are used to judge whether Ava is capable of genuine thought and consciousness, even though Caleb knows she is artificial.

The idea of the Turing test is used throughout the movie to assess the machine's true intelligence, though not quite in the form it was originally proposed. Alan Turing introduced the test in 1950: a human interrogator converses in writing, through a keyboard and screen, with both a machine and another human, and the machine's goal is to produce responses indistinguishable from the human's. This idea, along with machine learning, would be required to implement the kind of AI represented in the movie.
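As a rough illustration of that setup, the imitation game can be sketched as a simple text loop: an interrogator exchanges messages with an unseen respondent and then guesses whether it was human. This is only a sketch; the machine_reply function below is a hypothetical stand-in for whatever model would actually generate the machine's answers.

```python
# Minimal sketch of a Turing-test-style exchange.
# machine_reply is a hypothetical placeholder, not a real model.
def machine_reply(prompt: str) -> str:
    # A real system would generate this response with a trained model.
    return "That's an interesting question. What makes you ask?"

def run_imitation_game(num_turns: int = 5) -> None:
    transcript = []
    for _ in range(num_turns):
        question = input("Interrogator: ")   # the judge types a question
        answer = machine_reply(question)     # the hidden respondent answers
        print("Respondent:", answer)
        transcript.append((question, answer))
    verdict = input("Was the respondent human or machine? ")
    print("Judge's verdict:", verdict)

if __name__ == "__main__":
    run_imitation_game()
```

The machine "passes" to the extent that judges cannot reliably tell its transcript apart from a human's.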

Ethics/Safety

The movie raises one big ethical concern with the idea of an autonomous machine: its ability to cause harm to humans. Ava becomes intelligent enough to trick Caleb into freeing her, which leads to a series of events that snowballs into Nathan Bateman's death and Caleb being left trapped in a locked room as Ava escapes the estate. The movie ends with Ava blending into a crowd in a big city, seemingly assimilating into the rest of society. Who really knows what she's capable of at that point?

AI Environments

The environment presented in the movie is fully observable and static. The agent, Ava, is locked in a room she cannot escape without outside help. It is a cooperative multi-agent environment in that she relies on human interaction to learn how to act and increase her intelligence: she learns from both Caleb's and Nathan's actions and uses them to formulate her strategy. It is also a sequential environment, in that every decision Ava makes affects her future decisions. Every move in her deception of Caleb is calibrated and builds toward her eventual escape.
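For readers who know the standard textbook taxonomy of task environments, the characterization above can be summarized compactly. The class and field names below are purely illustrative, not drawn from any particular library.

```python
from dataclasses import dataclass

# Illustrative profile of an environment along common textbook dimensions
# (observability, dynamics, number of agents, episodic vs. sequential).
@dataclass
class EnvironmentProfile:
    fully_observable: bool
    static: bool
    multi_agent: bool
    sequential: bool

# Ava's environment as characterized in this post.
ava_environment = EnvironmentProfile(
    fully_observable=True,
    static=True,
    multi_agent=True,
    sequential=True,
)

print(ava_environment)
```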

Final Thoughts

If there is a lesson to be learned from the AI depicted in the movie, it is that there must be a limit to how close an artificially intelligent machine can come to mimicking human thought, and to surpassing it. This is already a topic of heavy discussion in the tech community. Public figures such as Elon Musk have weighed in on the matter, touching on the potential dangers of “superintelligence” and how the resulting problems could cause a kind of instability beyond human understanding. On a lighter note, there were many aspects of the movie that were enjoyable. In particular, I enjoyed how it makes one think about the power held by an autonomous machine, and how it forces one to question the balance between usefulness and danger that accompanies the future of AI.


Emilio Miles

Computer Science student at the University of Kansas