
Would companies be more diverse if A.I. did the hiring? | Joanna Bryson


5m read · Nov 3, 2024

Can AI remove implicit bias from the hiring process? “Remove”, as in entirely remove? No. But from what I understand, multiple people have told me that it's already reducing the impact of implicit bias, so they're already happy with what they're seeing.

So what is implicit bias, first of all? It's important to understand that implicit bias and explicit bias are two different things. Implicit bias is stuff that you're not conscious of; you're not aware of it, and it's hard, probably impossible, for you to control right now, on demand. You might be able to alter it by exposing yourself to different situations and changing what we in machine learning call priors, that is, changing your experiences. So maybe if you see more women in senior positions, you'll become less implicitly sexist, or something like that.

But anyway, explicit bias is like “I’m going to choose to only work with women” or “I’m going to choose to only work with men”, and I know that and I'm conscious of it. So HR departments are reasonably good at finding people who, hopefully honestly, are saying “yeah, I'm not going to be racist or sexist or whatever-ist; I'm not going to worry about how long somebody's name is or what country someone's ancestors came from.” So hopefully HR people can spot the people who are sincerely neutral, at least at the explicit level.

But at the implicit level, there's a lot of evidence that something else might be going on. Again, we don't know for sure whether it's implicit or explicit, but what we do know is that in the paper we did in 2017, one of my co-authors, Aylin Caliskan, had this brilliant idea of looking at the resume data. There's this famous study showing that if you take identical resumes and the only thing you change is whether they carry African-American names or European-American names, the people with European-American names get 50 percent more calls to interview, with nothing else changed.

And so now people are talking about “whitening” their CVs just so they get that chance to interview. So anyway, by the measures we used with the vector spaces, it looks as if the same bias we find in the data, the one that explains implicit bias, also explains those choices on the resumes. So does that mean people were looking at a name and explicitly saying, “Oh, I think that's an African-American”? Or were they just going through huge stacks of CVs and some didn't jump out at them in the same way that others did? Because we're pretty sure that by the time the candidates were all sitting in the room together, at that point things were okay.
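To make the vector-space measure concrete, here is a minimal sketch of the kind of association test the 2017 paper describes: comparing how close name vectors from two groups sit to “pleasant” versus “unpleasant” words in a word-embedding space. The word lists, the embedding source, and the function names here are illustrative assumptions, not the paper's actual code.

```python
# Illustrative sketch of a WEAT-style association measure over word embeddings.
# The embedding dictionary, word lists, and function names are assumptions for
# illustration; this is not the code from the 2017 paper.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word_vec, attr_a_vecs, attr_b_vecs):
    # How much closer a word sits to attribute set A than to attribute set B.
    return (np.mean([cosine(word_vec, a) for a in attr_a_vecs])
            - np.mean([cosine(word_vec, b) for b in attr_b_vecs]))

def bias_effect(group_x_vecs, group_y_vecs, attr_a_vecs, attr_b_vecs):
    # Difference in mean association between two groups of target words
    # (e.g. two sets of first names) toward the same attribute sets
    # (e.g. "pleasant" vs. "unpleasant" terms).
    x_assoc = [association(w, attr_a_vecs, attr_b_vecs) for w in group_x_vecs]
    y_assoc = [association(w, attr_a_vecs, attr_b_vecs) for w in group_y_vecs]
    return np.mean(x_assoc) - np.mean(y_assoc)

# Usage, assuming a hypothetical dict `emb` mapping words to vectors:
# effect = bias_effect([emb[n] for n in european_american_names],
#                      [emb[n] for n in african_american_names],
#                      [emb[w] for w in pleasant_words],
#                      [emb[w] for w in unpleasant_words])
# A positive effect means the first group of names sits closer to the
# "pleasant" terms than the second group does.
```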

And so what the AI is doing for them is helping them pick out the characteristics they're looking for and ignore the other characteristics. It's helping them focus on the things they actually wanted to look for, the way they would when they were sitting in the room with multiple pairs of eyes on a candidate, so they start from the right place, and they're finding people who were falling through the cracks.

A lot of people have trouble with there not being enough good people applying, or they thought there weren't enough good people applying, but actually they were missing people because they didn't see the qualifications buried in the other stuff when they were leafing through these stacks. So a lot of people are reporting that they have great data or that they're very pleased with the results, but that's privately and off the record, and I can't get anyone to go on the record.

Just recently at Princeton, the Center for Information Technology Policy ran a meeting about AI, and somebody there, again under the Chatham House rule, so I can't say who it was, but from an organization that's sort of in between, a special kind of organization, said that they're going to try to do this, and so I begged them to document it. I said, “Look, you're in a different situation, you don't have ordinary customers, so please document the results fully and then publish papers about it so we can really see what the outcomes are.”

So I hope we'll have that data, but so far I can only tell you that people are saying it really is working. As for the possible shortfalls of that kind of situation: well, first of all, can you be sure that you've eliminated bias that way? No; there are all kinds of ways you can accidentally pick up on things. Even if you don't have gender as a field, the system might recognize gender from the name, for example. So there are ways that machine learning picks up on regularities that are illegal to use, and, again, you have to do your own auditing and make sure that isn't happening.
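One very simple way to do that kind of auditing is to join group labels back onto the outcomes after the fact and compare selection rates across groups, even when the model never saw those labels as inputs. The sketch below is a minimal illustration under that assumption; the column names, the example data, and the 0.8 “four-fifths” threshold are illustrative, not a statement of what any particular system or law requires.

```python
# Minimal sketch of an audit for accidental proxy bias in an automated screen.
# Column names, example data, and the 0.8 threshold are illustrative assumptions.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Share of applicants marked as selected, per group."""
    return df.groupby(group_col)[selected_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group selection rate divided by the highest."""
    return rates.min() / rates.max()

if __name__ == "__main__":
    # Hypothetical audit data: the screening model never saw `inferred_group`,
    # but we join it back in afterwards to check the outcomes.
    applicants = pd.DataFrame({
        "inferred_group": ["A", "A", "A", "B", "B", "B", "B", "A"],
        "advanced_to_interview": [1, 1, 0, 1, 0, 0, 0, 1],
    })
    rates = selection_rates(applicants, "inferred_group", "advanced_to_interview")
    ratio = disparate_impact_ratio(rates)
    print(rates)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Warning: outcomes differ enough across groups to investigate further.")
```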

And I guess that's the biggest concern. Of course, anytime you scan something and make it digital, you make it amenable to hacking over the network, so you have to be careful about that. And I guess the biggest thing is: don't believe that just because you've automated part of a process, you've made it fair. You have to keep checking; just like anything else, you keep working to improve it.

But yeah, when you put these things in front of you and write them down, then you have the potential to keep improving. I guess there's one other thing I haven't mentioned, which is that once you've automated the process, you do open the door for somebody who is, say, an evil racist to go in and actually tweak things so that you get all one race.

So you need to make sure that there's adequate oversight and regular auditing, because people worry about accidentally introducing bias, and that's good, we should worry about that, but we should be really worried about deliberately introducing bias. That's the thing I think people miss, again, because people think artificial intelligence is like space aliens. Really it's almost like the Greco-Roman or Nordic gods or something: “Maybe we can pray to them correctly and they'll give us what we want, but they're capricious and we're not sure.”

No, it's not like that. It really is something we have an opportunity to fix, and it works in systematic ways, but it's important to understand that people are writing it. That means some people will make mistakes, some people will be sloppy, some people will do what they sincerely think is the best thing even though it actually isn't legal, and some people will go out of their way to do bad things, because they're just vandals or because that's how they got elected or whatever.
