What’s so scary about AI? (Besides human extinction)

Researchers estimate the likelihood of your AI nightmares coming true.
Moor Studio/Getty Images

Top insights for IT pros

From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.

Phew, experts offered an average estimate of just 5% for the chance that future AI advances cause human extinction. But wait—is 5% good or bad?

Mass catastrophe was just one of many theoretical AI risks posed to field experts in a recent survey from the Berkeley, California-based think tank AI Impacts. Other questions included: How soon will AI write a novel or short story good enough to make it to the New York Times best-seller list? (About 7 years until there’s a 50% chance.) And how soon will AI produce a song that is indistinguishable from a new song by a particular artist? (About 4 years until there’s a 50% chance.)

The international study surveyed 2,778 “AI researchers who had published peer-reviewed research in the prior year in six top AI venues.”

While nearly half of the experts gave a median 5% chance that AI will cause “extremely bad outcomes such as human extinction,” the scenarios that concerned them most were the spread of false information through deepfakes, the use of AI systems to manipulate public opinion at scale, and the deployment of AI by authoritarian rulers to control their populations.

As for how human extinction might play out, AI Impacts lead researcher Katja Grace offered a classic scenario: autonomous agents that develop their own objectives and displace humans.

“If they have their own goals, they’re kind of acquiring resources, trying to bring about their goals. I think the thought is that in the long-term, any goals that are not prioritizing human welfare are going to come into conflict with humans,” Grace said.

So, is this “5%” human extinction stat good news or bad news?

“I think it’s totally bad news,” she said.

But some IT pros consider the catastrophic questions to be less productive.

“We’re always thinking about: Will AI take over the world? Will it make decisions that kill humans?...Very few people are thinking about: What happens when I put critical information about my business into a platform that’s making decisions, and then who owns that at the end of the day?” Jon Marler, manager and cybersecurity evangelist at VikingCloud, told IT Brew in December.

Grace sees the survey’s value in shaping the conversation as AI policy-making begins, and as professionals weigh careers that may be reshaped or taken over entirely by AI.

“It’s really important to have AI researchers’ voices in that conversation. Both because they’re relatively informed about the topic, even if it’s quite hard to guess what will happen, and also because they are more responsible for what happens than most people,” she said.

More responsible by a few percentage points at least.

Top insights for IT pros

From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.