The “Terminator” movies may be more prescient than people thought.

With AI becoming omnipresent and ever more advanced, techsperts worry that the human race could be wiped off the map by synthetic viruses and other means if we don’t hit the kill switch.

Computer scientists Eliezer Yudkowsky and Nate Soares made this apocalyptic prediction in their dystopian new book “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.”

“If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die,” warned the AI gurus, who work at Berkeley’s Machine Intelligence Research Institute (MIRI), in the intro to the none-too-subtle tome.

Many techsperts see AI, which has become ubiquitous in every sector from academia to dating, as a natural evolution and a boon to humanity.

But much like in every sci-fi flick from “Ex Machina” to “Terminator,” the tech could evolve to the point where it calculates that humans are no longer necessary, the authors say.

“Humanity needs to back off,” cautioned Yudkowsky, who has spent years sounding the alarm over the tech-istential risks posed by AI on LessWrong.com, the website he helped create.


The pair takes the threat seriously, noting that of the many ways a superintelligence could strike, “only one of them needs to work for humanity to go extinct.”

Experts fear that power plants and factories will eventually be run by robots instead of humans, after which the machines would deem us disposable and bring about our techstinction, Vox reported.

To make matters worse, our comparatively puny brains might not even be able to comprehend the AI’s angle of attack in time. The authors likened our naivety to the Aztecs being invaded by Spanish conquistadors: the idea of “sticks they can point at you to make you die,” aka firearms, would have been difficult to wrap their heads around.

Even if there were a tell, this chameleonic tech could potentially keep its malevolent intentions concealed until it’s too late to pull the plug. “A superintelligent adversary will not reveal its full capabilities and telegraph its intentions,” warned the duo, the Daily Star reported.

In fact, the only way to stop judgment day, per the scientists, would be to nip it in the bud by preemptively bombing any data centers that show signs of “artificial superintelligence.”

Far-fetched as it may seem, the pair puts the chances of an AI apocalypse at between 95% and 99.5%.

“It is not even worth taking extra steps into the AI minefield, guessing that each step might not kill us, until finally one step does,” they write.
