Software

Scale AI CTO on China’s role in the AI race

IT Brew caught up with Scale AI Field CTO Vijay Karunamurthy to chat about China’s role in the AI industry.
article cover

Saul Loeb/AFP via Getty Images

3 min read

Scale AI—an AI company that provides data labeling, data curation, and reinforcement learning from human feedback (RLHF)—is rooting for the US in the AI race against China.

IT Brew caught up with Vijay Karunamurthy, the company’s field CTO—which is now valued at over $13 billion—to chat about China’s role in the AI industry, national security and risk mitigation, and an AI model’s willingness to work harder when money’s on the table.

Scale AI CEO Alexandr Wang has acknowledged just how much of a superpower China is when it comes to AI—the fact that they are one of the countries leading the charge and are predicted to be an AI world leader by 2030. How is Scale AI navigating this?

“The [Chinese] government’s empowered to access a lot of data—private data from their citizens…they’re trying to find the right talent, get them into China, get them building models,” Karunamurthy said, noting that China has utilized both open-source and closed source models.

“The idea that we’re in a competition with China is really important to us,” Karunamurthy added. “You’ve seen Alexandr talk about it in congressional testimony. We really do think it’s scary—the amount of talent that’s there that’s pushing ahead. And if you think about the amount of computing resources it takes to train one of these models, it’s really the US and China that are in direct competition with each other to have the right resources, have the right talent.”

One advantage of Scale AI, he said, is transparency. “We think, as a democratic society, we can empower a lot of these decisions with testing and evaluation. So, we’re armed with accurate information, and we can make those decisions in a more open, transparent way,” he said. “And that’s a superpower that the US has that China isn’t going to have in the near future.”

Top insights for IT pros

From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.

In light of this, what steps are Scale AI putting into place for the next five years?

Scale AI’s Donovan—an “AI digital staff officer for national security”—is used by teams within the Department of Defense. The AI-powered decision platform helps these teams leverage intelligence reports, satellite imagery, and publications. “And Donovan will actually prepare a mission report for you as an officer where it refers back to the primary source of that data,” Karunamurthy said.

Thinking of national security, how do you go about mitigating any of those situations in which you might say, “Well, Donovan got it wrong”?

“We believe testing and evaluation first is unbelievably important. So, these models are incredibly capable…But unless you test the application, and you’re able to assess what sort of data that model needs in order to get the right answer…all of that’s incredibly important to be able to test and to be able to assess how we do that.”

Scale AI employs a “combination of human experts plus model-assisted testing,” because the models allow them to scale up. “As we’ve scaled this up, we found all sorts of ways we can introduce bias into the model’s answers. With a lot of the models out there today, it’s as simple as telling the model that you will tip the model a $20 bill if it changes its answer, and it ends up really wanting to get that $20 tip. It will suddenly change its reasoning, even in a really complicated planning test.”

This sort of bribery helps the team discover weak spots and inaccuracies as they red team and “poke” at the model.

Top insights for IT pros

From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.