Software

Glaze aims to protect artists from AI

A digital tool from the University of Chicago wants to make sure AI is not stealing your art.

Fascinadora/Getty Images



A tech tool from the University of Chicago aims to prevent AI from being like your friend in high school and totally stealing your look.

Artists can submit their work to a platform called Glaze, which applies small changes to a digital image (tiny shifts in pixel color here, little artifacts added there) so that any model trained on the image learns from a subtly altered version of the design its creator wants to protect from theft.
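
Glaze's actual perturbation comes from an adversarial optimization against the feature extractors generative models use, which is beyond the scope of this story. But as a minimal sketch of the bounded-pixel-change idea, here is some hypothetical Python that adds small, clipped random noise to an image; the `perturb` function and `intensity` knob are illustrative stand-ins, not Glaze's real method or API.

```python
# Illustrative only: Glaze computes its changes adversarially; this sketch
# just shows what "small, bounded edits to pixel values" looks like in code.
import numpy as np
from PIL import Image

def perturb(path: str, out_path: str, intensity: int = 4) -> None:
    """Add small random pixel offsets in [-intensity, intensity], clipped to 0..255."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-intensity, intensity + 1, size=img.shape)
    cloaked = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(cloaked).save(out_path)

perturb("artwork.png", "artwork_cloaked.png", intensity=4)
```

At low intensities, edits like these are hard for a human to spot, which is the property the UV-light analogy below is getting at.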

A lead developer on the Glaze team thinks of the artwork changes like UV light—something a human can’t see, but AI can.

“The models, they have mathematical functions looking at images very, very differently from just how the human eye looks,” Shawn Shan, a grad researcher at the University of Chicago who demoed the tool for IT Brew, told us.

An invite-only web version of the tool, WebGlaze, lets someone send images from phones, tablets, or any device with a browser. You just upload your work, choose an intensity for the change, and hit “Run.” WebGlaze emails you the result and then deletes all images, according to its site.

Is AI stealing your whole vibe? Some artists have found their style imitated by a new class of generative image tools available online. In February, the New Yorker reported on a class-action lawsuit against AI-image generators. Generative-art services like Midjourney and Stable Diffusion are trained on large datasets of online images, like not-a-Star-Wars-droid “LAION-5B.”

“I can see my hand in this stuff, see how my work was analyzed and mixed up with some others,” Kelly McKernan, an artist in Tennessee, told the magazine.

Shan sees Glaze as a way of defending artists like McKernan. “The goal is to protect art style from being copied from artwork itself. These models are powerful. They can learn a style from just five to 10 images,” Shan told IT Brew.

AI’s glazed over. Glaze has its critics. The project began in 2020 as “Fawkes,” a pixel-tweaking tool designed to fool facial recognition.

Some skeptics, according to Shan, believe the platform blocks the progress of AI, or think the system won’t trick all models.

Proving Glaze works is also a difficult task, one that has forced Shan and his colleagues to build their own models and test the protection against them.

“We have been trying to just really communicate the limitation to artists and that this is a stopgap,” he said.

The deliberate disruption of AI is ongoing, and a team at the University of Chicago recently released a new platform called Nightshade. If Glaze puts the brakes on AI models stealing art, Nightshade knocks them out entirely.

“Poisoned data samples can manipulate models into learning, for example, that images of hats are cakes, and images of handbags are toasters,” a recent story in MIT Technology Review read.
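
Nightshade reportedly perturbs the images themselves so that models mislearn whole concepts. As a much simpler stand-in, here is a toy Python sketch of data poisoning by label flipping; it is not Nightshade's actual technique, but it shows how corrupted training pairs can teach a model the wrong association.

```python
# Toy label-flip poisoning: training on these pairs would push a model toward
# "hat means cake" and "handbag means toaster." (Nightshade instead alters
# image content; this is only a conceptual illustration.)
poisoned_labels = {"hat": "cake", "handbag": "toaster"}

dataset = [("img_001.png", "hat"), ("img_002.png", "handbag"), ("img_003.png", "dog")]

poisoned = [(img, poisoned_labels.get(label, label)) for img, label in dataset]
print(poisoned)
# [('img_001.png', 'cake'), ('img_002.png', 'toaster'), ('img_003.png', 'dog')]
```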

Glaze has a different goal, Shan said: to keep artists’ work out of AI models, not to corrupt them.
