It’s safe to say that not all tech is created equal. It’s the biases behind modern technology that NYU data journalism professor Meredith Broussard asks the reader to consider in her new book, More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, which delves into how prejudice is built into modern AI and computing technologies, often unintentionally.
Broussard joined IT Brew in March for a Book Club conversation on bias in tech.
This conversation has been edited for length and clarity.
People tend to think of computers and AI as inherently objective. You argue that that’s a problematic position. Can you explain that a little?
The idea that technology is objective or without flaw is itself a kind of bias that I call “technochauvinism.” Technochauvinism is the idea that technology is superior to other things.
What I argue instead is [that] we should think about using the right tool for the task. Sometimes the right tool for the task is a computer. Sometimes it’s something simple, like a book in the hands of a child sitting on a parent’s lap. One isn’t inherently better than the other.
When we’re talking about technology and AI, we should talk about the context in which the technology is used. At the beginning of the digital era, it was very easy to say, “There’s going to be so much technology in the future, and it’s all going to be great,” because we didn’t know very much then.
Now we know more, and we know that it depends on the technology, it depends on how it’s built, it depends on the context.
Can you explain more about the context of the technology’s use and deployment?
A pretty low-risk use of facial recognition is unlocking your phone. Mine doesn’t work half the time…but there’s a passcode. The stakes are incredibly low. A high-risk use would be facial recognition run on real-time video feeds by police, because the risk of misidentifying people with darker skin is very high. It leads to greater police surveillance and harassment of communities of color.
Technology includes the biases of its creators. So, when we have a small and homogeneous group of people creating technology, the technology gets the blind spots of that small and homogeneous group of people.
I think it’s important to emphasize that the people who make AI systems, and the people who buy, say, smart city technology or other AI systems, are not acting out of malevolence. I don’t believe that software developers get up in the morning and say, “I’m going to make a racist AI.” I don’t believe that they set out to make technology that discriminates against people with disabilities.
You’ve laid out some of the problems here, but what’s the solution? How can we address these issues both from an IT and a social perspective?
This is a complicated problem. It’s taken us several decades to get into this mess, and so it’s going to be complicated to get out of it. If there were an easy solution, somebody would have found it by now. If this were something we could code our way out of, we would have done it. No question.
I would say the first step is to stop expecting computers to be able to solve social problems. We should think about these technologies as socio-technical systems. Social scientists need to learn more about how technologies work, and technologists need to learn more about social science.—EH