
Bias in Tech with Meredith Broussard

Overview & Shownotes

Meredith Broussard is a data journalist working in the field of algorithmic accountability. She writes about the ways in which race, gender and ability bias seep into the technology we use every day.

For the episode transcript, download a copy or read it below.

Contact us at examiningethics@gmail.com

Links to people and ideas mentioned in the show

  1. Meredith Broussard, “More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech”

Credits

Thanks to Evelyn Brosius for our logo. Music featured in the show:

“Funk and Flash” by Blue Dot Sessions

“Rambling” by Blue Dot Sessions

Transcript


Bias in Tech with Meredith Broussard

[music: Blue Dot Sessions, Funk and Flash]

Christiane Wisehart, host and producer: I’m Christiane Wisehart, and this is Examining Ethics, brought to you by The Janet Prindle Institute for Ethics at DePauw University.

Meredith Broussard is a data journalist working in the field of algorithmic accountability. She writes about the ways in which race, gender and ability bias seep into the technology we use every day.

Meredith Broussard: So what happens when we build a machine learning system is we take a whole bunch of data, we plug it into the computer and we say, “Computer make a model,” and computer says, “OK.” Makes a model…The model shows the mathematical patterns in the data…So there is bias in the past. There is historical bias. People have unconscious bias and all of those patterns are what the computer is seeing and all of those patterns are what the computer is reproducing.

Christiane: We’ll discuss all of this and much more on this episode of Examining Ethics.

[music fades out]

Christiane: Before I play this interview, I wanted to let you know that this will be my last episode of Examining Ethics. I’m moving on to another position at another university. And while I’m excited about my new job, I’m sad to say goodbye to this show and to the listeners who have supported us for the last seven years. This little podcast has been maybe the most important thing I’ve ever made in my professional life, and I’m just filled with appreciation for everyone that has worked on the show with me: my co-creator Sandra Bertin, my co-pilot Eleanor Price, and my Greencastle collaborator Kate Berry. Thanks also to Evie Brosius for designing our logo and to Brian Price for his sound advice (pun very much intended!). I’m also grateful to all of my guests, and am most of all grateful for you, the listeners. If you want to reach out to me, send a line to examiningethics@gmail.com–my colleagues will make sure I see your messages. You can also reach out to that same address if you’re curious about what the future holds for the Examining Ethics podcast. All right, let’s get to this interview about bias in the tech world with Meredith Broussard!

[interview begins]

Christiane: Welcome to the show, Meredith Broussard.

Meredith Broussard: Thanks for having me. It’s exciting to be here.

Christiane: So we’re here to discuss your new book More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. So just give us a brief broad strokes overview of your project here.

Meredith Broussard: I am a data journalist and that means I focus on finding stories in numbers and using numbers to tell stories. And specifically, one of the things I do is I build artificial intelligence in order to commit acts of investigative reporting. Now when I started doing this a few years ago, it was very hard to explain to people what it meant. So I kind of drifted into explaining artificial intelligence. So that’s one of the things that I do in the book.

But then I also connect the technical sides of AI to larger social issues. So I write about something I call technochauvinism, which is a kind of pro-technology bias that says that technological solutions are superior to others. And I don’t think that’s true. I think we should use the right tool for the task. Whether that’s, you know, a computer or a book in the hands of a child sitting on a parent’s lap.

So when there are glitches, where a computer does something that is racist or sexist or ableist, we tend to dismiss it: oh, it’s just a blip. It’s just a momentary glitch, it’s something we can fix in the code. What I argue in the book is that we shouldn’t be technochauvinists. We should look at these kinds of incidents as indicators that we need to address something about society as well as something about the code.

Christiane: So if computers or technology are just tools, how is it possible for them to be racist, sexist or ableist? Those things that we normally associate with people?

Meredith Broussard: Well, it has to do with the way that machine learning systems are made. So the AI systems that we have nowadays are usually machine learning systems. It makes it sound like there’s a little brain inside the computer and there is not. It’s just math, it’s very complicated, beautiful math. So what happens when we build a machine learning system is we take a whole bunch of data, we plug it into the computer and we say, “Computer make a model,” and computer says, “OK.” Makes a model.

The model shows the mathematical patterns in the data. And then what we can do with this model is it’s very powerful. We can make decisions, we can make predictions, we can generate new text, generate new images, generate new video or audio. That’s what generative AI is. So the model is a very sophisticated pattern recognition and reproduction machine, it’s not doing anything magic.

So there is bias in the past. There is historical bias. People have unconscious bias and all of those patterns are what the computer is seeing and all of those patterns are what the computer is reproducing.

Christiane: So yeah, let’s, let’s get into the other thing you were just talking about, which is technochauvinism. Help us better understand what it means and then how it intersects with the machine learning biases that you were just talking about.

Meredith Broussard: So, technochauvinists believe things like computers are more objective or neutral or unbiased. But this is not true. Computers are just machines that do math, they are nothing more, they are nothing less, they’re literally computing and we tend to forget this because humans have this, you know, really lovely tendency to anthropomorphize things, right?

We attribute agency to our phones, to our computers because we use these devices so much that we get attached to them. And we, you know, start imagining things about them because we have wonderful imaginations. But a computer is just a machine. And so when you start believing in the computer as a beloved device or as, you know, some kind of savior, when you start believing that quantitative methods are superior to qualitative methods, then you start veering into technochauvinism. Again, it’s about the right tool for the task. Sometimes we need quantitative methods. Sometimes we need qualitative methods. Sometimes we need both, sometimes we need them both iteratively. It’s not about one or the other.

Christiane: There are chapters in your book that outline the ways that bias in tech plays out in places like the criminal justice system and education, but then also for identity categories like ability, gender, and race. We can’t cover all of those, but trust me, having read the book, they are fascinating. So I encourage people to read the whole book. But I was wondering, could you just pick one of those areas that’s, like, your favorite and help us understand how bias plays out there?

Meredith Broussard: Oh, you mean my favorite form of oppression? [laughter from both]

Christiane: I’m sorry. Yeah, you’re right. What a horrible way to word that!

Meredith Broussard: It’s OK. Let me give you the example from the book that I think illustrates my point really well. And this is an examination of mortgage approval algorithms by The Markup. The Markup is a really terrific algorithmic accountability journalism organization. And they put out a story a few years ago about the hidden bias in mortgage approval algorithms, because people really want to automate every feature of banking.

And as we know, getting a mortgage or getting a home loan is a major way of building intergenerational wealth. But when The Markup looked at automated mortgage approval algorithms, they found that these algorithms were biased, that they were 40-80% more likely to deny borrowers of color as opposed to their white counterparts. And in some metro areas, this disparity was more than 250%.

So why is this? Well, the math, or the patterns that the mortgage approval models were picking up on, were patterns of historical bias. So if we look at it sociologically, we can recall that there were practices in the US like redlining. The US has a very long history of residential segregation. We have a long history of financial discrimination which plays out in lending. So when you feed a computer data about who’s gotten mortgages in the past, it’s going to look like not many people of color got them, and then you’re telling the model to continue making decisions like the ones it has seen in the past. So you’re asking the model to continue this pattern of discrimination.

Now, mathematically, it would be possible to put a thumb on the scale and make mortgage approval algorithms more inclusive. It would be possible to make sure that these algorithms are giving equal access to all borrowers. Is anybody doing that? Not to the best of my knowledge.

Christiane: In order to help mitigate this bias, or these problems that come with technochauvinism, is it something where we need to be more aware that it exists and be suspicious of most things from computers and tech, or can we also attack it from the other side and try to teach computers more equitable ways of doing these things?

Meredith Broussard: Well, we could do both of those things. Those are both great ideas. I mean, absolutely, we need to look at sociological factors as well as technical factors when we build computational systems. We’re at a really interesting point of our development where all of the problems that are easy to solve with technology have been solved. Like word processing, for example, like that was a relatively easy problem to solve with technology.

And guess what? Now we have word processing all over the globe. We have it for most languages, we know how to do it like it is more or less solved. But the harder problems that we’re left with are socio-technical problems, right? So when you build something like a social network, it’s a socio-technical system, it’s not just about the technology, but computer scientists tend to build these things as if they’re just about the technology.

And so we need a really fundamental shift in people’s thinking about how we build technology. And another thing we can do is we can look at incidents of AI discriminating, and use that as an indicator for where we need to make some dramatic social progress, in addition to, you know, not trusting the technology entirely to make decisions for us.

So the case of the mortgage approval algorithms is a good one. Another thing I wrote about in the book is the way that racism in medicine gets embedded in technological systems. So there are a lot of ways that racist beliefs have made their way into the medical system. And one of these is the idea that race is somehow biological, right? So race is a social construct.

But in medicine, in science, it sometimes gets treated as if it were biological and that leads to all kinds of problems. So, for example, until 2021 there was this calculation called GFR: glomerular filtration rate that was used to determine when a patient was sick enough to get onto the kidney transplant list.

So GFR measures your kidney function. And until 2021 black people were given an additional multiplier in this GFR calculation because it was thought that black people had greater muscle mass than other kinds of people. And the result of this was that black patients had to be sicker in order to qualify for the kidney transplant list. And so this racist calculation, you know, this race correction was embedded in the mathematical formula that was used to calculate GFR.

It was embedded in every algorithm used by every lab everywhere in the world. These kinds of things exist. And when we just go in and build technological systems that are based on existing human systems, we end up reproducing those kinds of harms. I am delighted to say that in 2021 the GFR formula changed. It no longer includes that calculation for race. And so millions of people now have access to life-saving treatment because that formula was changed.

Christiane: It’s heartening to hear about changes like that. But I’m wondering if there’s a way to create technology that’s not only not racist but anti-racist.

Meredith Broussard: Yes, it is possible. Is anybody doing it? I would definitely point listeners to a number of thinkers on this subject. My NYU colleague, Charlton McIlwain, has a fantastic book called Black Software. If you haven’t read Safiya Noble’s book, Algorithms of Oppression, you definitely need to go get that one right away. I also highly recommend Ruha Benjamin’s book, Race After Technology, and her newer one, Viral Justice: How We Grow the World We Want. Joy Buolamwini of the Algorithmic Justice League, I believe, has a book coming out soon. There’s Cathy O’Neil’s book, Weapons of Math Destruction. And, oh, Kashmir Hill has a new book called Your Face Belongs to Us, which is about facial recognition.

Christiane: Yeah, I wanted to talk about the surveillance thing a little bit too, because that was another point of hope that you bring up: you think that public opinion is turning against surveillance culture and against all of these cameras capturing our faces all of the time. But at the same time, my house is the only house in the neighborhood that doesn’t have a Ring camera. And so, even if people decided, “OK, we’re gonna turn off the Ring camera,” is it too late for that tide to turn?

Meredith Broussard: You know what I’m surprised by? I’m surprised there isn’t more Ring camera vandalism. Because you can defeat a Ring camera with a Post-it Note; it’s not that hard to interfere with it. So, you know, I’m kind of surprised that things have lasted as long as they have.

One of the ways that people tend to talk about technology, and especially surveillance technology, is they tend to talk about it as if it’s inevitable, as if it’s something that we have to sit back and let happen to us. And that is not at all true. We do not have to live in a surveillance state. We do not have to consent to our data being exploited, being purchased and sold. We do need more comprehensive privacy legislation at the federal level. It is not inevitable that big tech should have control of our data.

Christiane: So what brought you to this work? Why is this something that you care about?

Meredith Broussard: I came to it as a journalist. One of the traditional functions of the media is to hold power accountable. And in today’s world, algorithms are increasingly being used to make decisions on our behalf. And so that accountability function has to transfer on to algorithms and their makers. So the kind of data journalism that I do is called “algorithmic accountability reporting.”

So holding algorithms and their makers accountable. I started my career as a computer scientist, I quit to become a journalist. And I often build technology in order to investigate social phenomena. I also do a lot of reverse engineering of technology. So, you know, somebody will come to me and say, all right, I want to know how this thing works, and I will say, oh, well, you know, that’s a system that does blah, blah, blah, it must work like blah, blah, blah, because all of these things pretty much work the same.

So it’s like how a mechanic can look at a car: even if you haven’t worked on that car before, you kind of know how motors work, so you can kind of figure out what’s happening under the hood. It’s the same thing with software. Software is constructed like a motor is constructed. And when you understand how software is made, then you can just look at software and say, oh, this must be doing this, this and this.

[Interview ends] [music: Blue Dot Sessions, Rambling]

Christiane: If you want to find out more about Meredith Broussard’s other work, download a transcript, or learn about some of the things we mentioned in today’s episode, visit prindleinstitute.org/examining-ethics.

Examining Ethics is hosted by The Janet Prindle Institute for Ethics at DePauw University. Christiane Wisehart wrote and produced the show. Our logo was created by Evie Brosius. Our music is by Blue Dot Sessions and can be found online at sessions.blue. Examining Ethics is made possible by the generous support of DePauw Alumni, friends of the Prindle Institute, and you the listeners. Thank you for your support. The views expressed here are the opinions of the individual speakers alone. They do not represent the position of DePauw University or the Prindle Institute for Ethics.

[music fades]

Christiane: I am not good at goodbyes. Alright.
