A recent joint statement warning AI could usher in the apocalypse reads like a who’s who of corporate leaders developing the very technology they’re cautioning against.
On May 30, the nonprofit Center for AI Safety released a one-sentence statement warning that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” It was signed by 350+ scientists, researchers, and executives—among them OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei, not to mention a slew of C-suite leaders from other firms with AI ambitions, like Microsoft CTO Kevin Scott.
As the New York Times noted, prominent Turing Award-winning AI scientists Geoffrey Hinton and Yoshua Bengio were also signatories, though fellow award winner and Meta AI research lead Yann LeCun did not sign.
Avivah Litan, VP and distinguished analyst at Gartner, told Computerworld the statement was “without precedent.”
“When have you ever heard of tech entrepreneurs telling the public that the technology they are working on can wipe out the human race if left unchecked?” Litan told the site. “Yet they continue to work on it because of competitive pressures.”
Many of the signatories have previously expressed concerns about artificial general intelligence (AGI), a term for hypothetical machines that demonstrate human-level capacity at most tasks. Debates among AI researchers and entrepreneurs over how close the current technology is to AGI have been contentious, but many experts view its arrival as a distant prospect.
Some of Altman’s proposed solutions—such as creating regulatory and licensing bodies to oversee and restrict use of AI—have run into pushback on the grounds that they could hand a handful of companies a de facto monopoly on AI development. A Tow Center analysis found media coverage of AI has paid disproportionate attention to hyped-up existential risks over more mundane issues, like AI’s contributions to inequality and other short-term harms that might be more deserving of lawmakers’ immediate attention.