IT Strategy

Voluntary agreements to fight deepfakes aren’t enough, CISA chief warns

CISA Chief Jen Easterly warned of “unimaginable harm” if legislators allow AI devs to act with “complete impunity.”

Voluntary commitments to fighting deepfakes just aren’t going to cut it, according to one of the nation’s top cybersecurity officials.

Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly warned at the Washington Post’s Futurist Summit last month that generative AI is certain to “inflame” existing threats around the spread of disinformation online before the November elections. Tech firms that have promised to fight deepfakes could skirt those commitments, since they aren’t actually bound by law to keep them, and they face fewer regulations around AI development in general than they would in the European Union, Easterly added.

“I do think efforts to ensure that anybody can tell whether a video is generated with AI capabilities, whether it’s a deep fake, that is very important,” Easterly told attendees. “And so the problem is, however, there is no real teeth to these voluntary agreements.”

“There needs to be a set of rules in place, ultimately legislation,” she added. “I know Congress has put out a framework on this, there’s the EU AI Act on it, but frankly there needs to be safeguards put in place, because these capabilities are incredibly powerful.”

Easterly specifically pointed to a recent agreement signed at the Munich Security Conference by 20 major firms, including Google, Microsoft, Meta, and OpenAI, to fight AI-generated election disinformation, calling it insufficient. Profit-driven AI developers could enable US adversaries to cause “unimaginable harm to populations around the world” if those developers are allowed to “operate with complete impunity,” Easterly concluded.

While Easterly was primarily addressing the potential for malicious actors to sway elections with AI-generated hoaxes, the EU AI Act imposes transparency requirements on a variety of “high-risk” sectors beyond elections and bans uses of AI that threaten civil rights.

In the US, a bipartisan group of senators has been working on potential AI legislation, but the Washington Post separately reported that there are few signs they are close to an official proposal. Sen. Mike Rounds (R-S.D.) told the paper that senators are working on an “incentive-based” model, adding that the EU approach will “chase AI development to the United States.”

The threat posed by deepfake technology also extends to cybercrime, where cloned voices, images, and video have obvious utility in scams and theft. Tools to create real-time deepfakes of at least rudimentary quality are easily found online, and attackers successfully used them in a $25 million theft from a Hong Kong firm earlier this year.

Security vendors have also warned that AI is already being used to juice phishing schemes. University of Waterloo researchers bypassed several leading voice authentication systems last year by adjusting cloned audio to have lifelike imperfections.
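
The Waterloo result hints at the cat-and-mouse dynamic here: anti-spoofing checks key on statistical tells in synthetic audio, and attackers erase them. As a rough, hypothetical illustration (not the researchers’ actual method), the Python sketch below shows a naive “too clean” liveness heuristic and how layering in ordinary recording imperfections defeats it; every function name and threshold is invented for the example.

```python
# A toy illustration (not the Waterloo researchers' actual technique): synthetic
# audio is often unnaturally "clean," so a naive spoof check might flag a low
# noise floor -- and an attacker can defeat that check by layering in the
# imperfections of a real recording. All names and thresholds here are made up.
import numpy as np

SAMPLE_RATE = 16_000  # 16 kHz, common for voice systems

def noise_floor_db(audio: np.ndarray) -> float:
    """Estimate the noise floor from the quietest 10% of short frames (in dBFS)."""
    frames = audio[: len(audio) // 400 * 400].reshape(-1, 400)
    rms = np.sqrt((frames ** 2).mean(axis=1)) + 1e-12
    quietest = np.sort(rms)[: max(1, len(rms) // 10)]
    return 20 * np.log10(quietest.mean())

def naive_liveness_check(audio: np.ndarray, threshold_db: float = -65.0) -> bool:
    """Hypothetical heuristic: real microphone audio rarely has a near-silent floor."""
    return noise_floor_db(audio) > threshold_db

def add_lifelike_imperfections(audio: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Layer in room hiss and slow gain drift so the clip resembles a real take."""
    room_noise = rng.normal(0, 10 ** (-55 / 20), size=audio.shape)  # ~-55 dBFS hiss
    drift = 1 + 0.05 * np.sin(2 * np.pi * 0.3 * np.arange(len(audio)) / SAMPLE_RATE)
    return np.clip(audio * drift + room_noise, -1, 1)

rng = np.random.default_rng(0)
t = np.arange(SAMPLE_RATE * 2) / SAMPLE_RATE
cloned = 0.3 * np.sin(2 * np.pi * 220 * t)  # stand-in for "too clean" cloned speech
cloned[: SAMPLE_RATE // 2] = 0              # digital silence: a dead giveaway

print(naive_liveness_check(cloned))                                   # False: flagged
print(naive_liveness_check(add_lifelike_imperfections(cloned, rng)))  # True: passes
```

Production anti-spoofing models look at far subtler cues than a noise floor, but the arms-race shape is the same: whatever artifact a detector learns to spot, an attacker can learn to remove.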
