
Is Artificial Intelligence an existential threat to humanity?

The masterfully scripted Ex Machina is a slow-burning, cerebral thriller that subtly explores the moral and ethical questions surrounding Artificial Intelligence (AI). Indeed, I have yet to watch a better movie on the subject.

Ex Machina (literally, “from the machine”) revolves around three characters. First, we meet Caleb, an exceptional computer programmer. He works at Bluebook, a company that processes 94% of all internet search requests (sound familiar?). Caleb flies to a remote research facility where he meets Nathan, the founder of Bluebook. Nathan has created a humanoid, Ava, and wants Caleb to administer a Turing test. If Ava passes the test by demonstrating intelligence, Caleb will be part of the “greatest scientific event in the history of man.”

Things start out well. Near the middle of the movie, however, we come to a pivotal scene.


This is one of my favorite scenes. Caleb, by now quite attached to Ava, learns that she will be terminated, and her mind “downloaded.” He can’t hide his feelings; he is almost in tears. Nathan notices this and spitefully asks, “You feel bad for Ava? Feel bad for yourself. One day the AIs are going to look back on us the same way we look at fossil skeletons on the plains of Africa.” Ouch!

So, are AIs an existential threat to humans? This is a central question in the field of Artificial Intelligence.

Opponents of AI advocate for caution, clearer legal frameworks, and more oversight. Proponents, in contrast, argue that the risks are oversold, insisting that humans will always control AIs and will ultimately reap great benefits from them. Ex Machina presents both views. Nathan works obsessively to develop AIs, yet he predicts they will lead to the extinction of humanity. Caleb, on the other hand, accepts AIs as almost human: one can discuss books with them, go on a date, and even fall in love. Nevertheless, the movie definitively answers the question of how big a threat AIs pose to humanity.

Today, AIs are poised to cause significant job displacement, and not just in factories. IBM’s Watson, a weak (or narrow) AI, recently diagnosed a rare disease that had baffled doctors for months. Why consult your local physician when Watson can crosscheck your symptoms against millions of medical records and instantly provide a diagnosis? In such cases, the synergy of humans and AIs could benefit humanity. Even in the creative fields, machine learning is advancing rapidly, with AIs cooking, painting, and composing music. Humanity faces a dwindling set of tasks that we can perform better than machines; what will happen when, God forbid, they can outperform us at every single task? Perhaps we can answer the question better if we don’t frame the situation as a zero-sum, “us versus them” scenario. In the end, I suppose we’ll just have to wait and see.

Let’s hope Nathan was wrong.
