(Note to the reader: This article discusses a philosophical inquiry that many people find deeply, emotionally disturbing. Truly, and in all sincerity: If you’re susceptible to existential dread, stop reading.)

Much has been said in recent years about the purported dangers of artificial intelligence (AI). Technologists such as Elon Musk have said that AI is "far more dangerous than nukes," as CNBC reports, and that the lack of regulations mediating the relationship between man and machine is "insane." The distinction he draws is between case-specific AI — algorithms that control, say, which ads are pushed your way on Facebook — and AI with an open-ended utility function, which essentially teaches and rewrites itself. Era-defining physicist Stephen Hawking voiced similar warnings before he passed away, as Vox recounts, as have AI researchers at Berkeley and Oxford.

Science fiction (or speculative fiction, if you prefer) has been yammering about cruel AI overlords for decades — Skynet and John Connor-killing Terminators come to mind — while it has also portrayed beneficent androids such as Data from Star Trek: The Next Generation. Legendary English mathematician Alan Turing devised the "Turing Test" in 1950 to determine whether a machine could pass as human in conversation ("Blade Runner," anyone?). Before that, Isaac Asimov in 1942 developed his laws of robotics dictating how we ought to program machines to protect ourselves from them. At minimum, such discussions reflect fears about the future, alienation in a digitized world, and algorithmic control over daily life.

But what if an all-powerful AI were even more dangerous — much more — than we think? What if merely by thinking about it, we doom ourselves to everlasting torment under its eternal watch?
