John Blomster: I'm John Blomster, and today we're speaking with Dr. Ifeoma Ajunwa, Assistant Professor of Labor and Employment Law in the ILR School at Cornell University and associated faculty member at Cornell Law School. Ifeoma is a renowned expert in issues at the intersection of law and technology, including the ethical governance of workplace technologies. Ifeoma is also the recipient of the 2019 NSF CAREER Award and the 2018 Derrick A. Bell Award from the Association of American Law Schools. And she's here today to talk about the subject of a forthcoming scholarly paper focusing on challenges in automated decision-making and algorithmic bias. So, today we're taking on computers and big data. So, thank you for taking the time to speak with us.
Ifeoma Ajunwa: Thank you for having me.
Blomster: Before we get into the subject of the paper, why did you want to get into the field of law and technology and devote so much of your scholarly work to this space?
Ajunwa: So, for me, it was really kind of happenstance. As a graduate student at Columbia, I was researching reentry of the formerly incarcerated, but in speaking with them, it really became clear to me that a lot of the automation of industry, but specifically the automation of decision-making in hiring, was actually a way that bias was being replicated in the system, and was actually a way to cull certain populations, unfortunately, from the labor market, and that this was especially impacting formerly incarcerated people. But of course, you know, the more I looked into it, the more I saw that it also impacted other types of people, like older workers, you know, women who had perhaps taken time out of the workplace, etc.
Blomster: So, we live in an age where automated decision-making impacts more fundamental aspects of our lives than at any time in our history, obviously. This is something that you call the algorithmic turn. Can you explain a little bit what you mean by the algorithmic turn?
Ajunwa: Yeah. So, the algorithmic turn is not necessarily original to me. This was a term first used by a researcher named Napoli, a researcher from the communications/STS area. But what I'm arguing is that it's not just that we now have the algorithmic turn; it's that we now actually have algorithmic capture. And what I mean by algorithmic capture is that now pretty much the only way we can do certain types of things is through automated decision-making. So, for example, in the arena of hiring, a lot of firms are now moving to solely automated hiring. So, that means that for you to apply to that firm, you have to actually fill out your application in a system that will then algorithmically sort your application or sort your resume.
Blomster: The thinking behind that, in addition to improving efficiency, is that an algorithmic system will take out the bias that may be present in human decision-making. But this is a belief that you challenge in your forthcoming paper. What are some of the red flags that you're seeing as more and more organizations are turning toward machine learning algorithms for efficiency and decision-making?
Ajunwa: Yeah, you're right. A lot of corporations, I believe, think that, you know, the turn to automated hiring will actually remove the bias in the hiring systems. And unfortunately, that's simply not true. Just because something is automated doesn't necessarily mean it automatically has less bias, because you still have to think about how the system was created. What were the inputs into the system? What factors went into creating the algorithms? What did the training data that was used look like? And you've seen this in headlines involving both Amazon and Facebook.
So, Amazon, for example, created an automated hiring system that they really were hoping would help them diversify their workplace. But they found that such a system actually was biased against women. And one explanation is that the training data was the current employees, who were predominantly male. So, I think a big red flag is when a firm just says, "Well, we moved to automation, so everything is okay." I think you really have to wonder if that firm has really carefully thought through the criteria that it's using, whether it's audited the results of the automated hiring for any kind of bias, and then, even after the audit, what steps it took to correct that. So, we know, you know, from a whistleblower, that when Amazon figured out that its automated hiring system was biased against women, it just simply scrapped that system. We don't know what new steps it then took to actually solve the problem.
Blomster: So, the discussion of algorithmic bias that, as you mentioned, Amazon and a number of other major companies have experienced with certain types of these systems is still predicated on this belief that when it comes to decision-making, you know, humans: bad, machines: good. Humans: biased. Machines: not. But is it actually possible to remove the human element from machine learning-based decision-making?
Ajunwa: Currently, no. We don't have the singularity where machines are able to make decisions completely on their own. The fact remains that to pit automated decision-making against human decision-making is really to create a false binary, because automated decision-making requires so much human decision-making to make it work. So, humans are still making decisions about what inputs will go into the automated decision-making, and humans are still interpreting the results of automated decision-making. So, the human hand, if you will, remains entangled, even in automated decision-making.
Blomster: One of the things that is particularly concerning is the scale at which algorithmic systems propagate bias. Why is algorithmic bias more dangerous in some ways than the implicit human bias that's involved in maybe a more traditional hiring process?
Ajunwa: That's a great question. I think bias in automated systems is even more insidious than human bias because a lot of the time, it can hide under the veneer of objectivity. It can hide under the veneer of impartiality, and then people blindly trust the results without really being critical of what the results are saying, because it is automated, and that's what makes it dangerous. Furthermore, automated decision-making can actually hide bias, such that we don't necessarily realize that biased thinking has gone into the criteria behind the decision-making itself, in the same way that we might notice if a human is using biased criteria. The reason for that is machine learning. Automated programs can learn from how past decisions were received, and can use that to create criteria of their own without being explicit about what criteria are being created, in a way that humans, you know, would have to explain: why am I now making these decisions? And that means, then, that bias can actually be hidden while being replicated.
Blomster: So, Title VII of the Civil Rights Act of 1964 prohibits employers from discriminating against employees on the basis of sex, race, color, national origin and religion. So, does Title VII apply to employment decisions made by automated systems, or are there gaps in the regulations that are governing this type of decision-making?
Ajunwa: So, that's a really good question and one that legal scholars are definitely grappling with. But of course, automated decision-making systems cannot be thought to have intent, like humans have intent. So, you really have to impute the intent to the employer. Now, Title VII, fortunately, allows claims not only for intentional discrimination, right, which is disparate treatment, but also for unintentional discrimination, which is disparate impact. And that is to say that even if the employer did not intend to discriminate, but engaged in or used one particular criterion that resulted in a disparate impact on a protected category of workers, then that's still discrimination. So, in that way, you know, the disparate impact section of Title VII definitely still does apply when you're using automated decision-making and it results in discrimination.
So, the problem, however, is that even though disparate impact as a cause of action is something that people who have experienced discrimination under automated hiring could bring, there is this huge burden of proof, which they may not be able to fulfill, because the mechanisms of providing proof are entirely under the control of the employer. And some design features of automated hiring actually make it difficult to retain the proof that plaintiffs would need. So, for example, plaintiffs, to prove disparate impact, would need to show the number of people who applied from a certain protected category versus the number who were actually hired. And they would need to show that, essentially, people from a certain protected category who applied were disproportionately denied employment in comparison to the majority group.
The problem with automated hiring is that some of them, or many of them, don't retain a record, for example, of people who tried to apply but then could not complete the application because of design features that cull certain categories of people.
So, one such case is a man in Massachusetts who went to complete an application and then found that he couldn't actually complete it, because a drop-down menu prevented him from doing so. Of course, that claim would be brought under the Age Discrimination in Employment Act, but that act still follows the Title VII framework. The issue, of course, is that had this man not realized that the drop-down menu was essentially cut off at a certain year, had he not made the connection between that and age, he would have walked away, and there would be no record that he actually failed to complete his application. And therefore there could be no proof of a disparate impact on older workers.
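To make the proof burden Ajunwa describes more concrete, here is a minimal sketch, with entirely hypothetical applicant and hire counts, of the selection-rate comparison a disparate impact claim rests on. The 80% threshold below is the EEOC's four-fifths rule of thumb; it is not discussed in the interview but is a common benchmark for adverse impact.

```python
# Minimal sketch of a disparate (adverse) impact comparison.
# All counts are hypothetical; the 0.8 cutoff reflects the EEOC's
# "four-fifths rule" of thumb, used here purely for illustration.

def selection_rate(hired: int, applied: int) -> float:
    """Fraction of applicants from a group who were actually hired."""
    return hired / applied

# Hypothetical numbers for a protected group and the majority group.
protected_rate = selection_rate(hired=10, applied=200)  # 0.05
majority_rate = selection_rate(hired=60, applied=400)   # 0.15

impact_ratio = protected_rate / majority_rate            # ~0.33
print(f"Impact ratio: {impact_ratio:.2f}")
print("Evidence of adverse impact:", impact_ratio < 0.8)
```

The sketch also shows why the record-keeping point matters: if the hiring system never logs applicants who were screened out by a design feature, the applicant counts in the denominator cannot be established, and the comparison cannot be made.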
Blomster: Has there been a case where a plaintiff was successful in proving disparate impact? I ask just because when you're dealing with so much big data, it's such a hard thing to wrangle.
Ajunwa: Right. So, as of now, there are no known cases of people actually bringing action based on an automated hiring system. And that's because many job applicants are operating as individuals, and they don't necessarily realize that when they are denied employment, it has anything to do with being a member of a protected category.
Blomster: So, exploring possible solutions that you talk about in the paper. One view of any instance of algorithmic bias is that it's a technical issue. So, if we fix the tech, we fix the system, we're good to go, bias free. But as you argue, why should we instead view the issue of algorithmic bias as one that is effectively legal in nature in order to truly understand what the problem is?
Ajunwa: So, I believe the issue of algorithmic bias is a legal problem. It's not just a technical problem, because to say that it's a technical problem is basically to say that the computer is not doing exactly what it's been asked to do, that it's somehow being incorrect in the way it's interpreting what it's being asked to do. But in looking at the development of automated hiring systems, what my co-author, Dan Green, and I have found is that, actually, automated hiring systems are meant to clone your best people. They're meant to replicate the same people that you already have. And the way to do that, which is perfectly lawful, is through cultural fit. So, automated hiring systems are really just optimizing for cultural fit. So, any bias in them is really bias reflecting that the culture itself is already biased.
So, I think the legal issue here is why we accord such large deference to employers in terms of determining what cultural fit means for their firm. Why don't we use more objective criteria in thinking about job fit? And why do we just allow employers to have carte blanche in saying this person fits into my organization or not, rather than making it about criteria that's actually probative of whether the person can perform the job?
Blomster: That's funny, in terms of talking about cloning your best people. One example you use in the paper was a company where the top applicants all had the same name and all played lacrosse. And that's the result of the kind of closed loop that these kinds of algorithms create.
So, what are some legal frameworks that could offer some tangible solutions to addressing this particular problem?
Ajunwa: One first idea, you know, is really to rethink this idea of cultural fit. Really push employers to actually do a job analysis when they put out an advertisement, such that the advertisement reflects the qualities needed to fulfill the job, the qualifications needed to fulfill the job, not just, you know, who fits into an organization, because that's really just taking into account who's already present in the organization. And if you have a race or gender imbalance, then that's just replicating that. Another is also to allow other ways for plaintiffs to be able to bring suit when they have been confronted with bias in automated hiring systems. As I previously said, you have disparate impact and you have disparate treatment, but those are rather limited for plaintiffs who are looking at automated hiring systems.
And so I argue, then, for discrimination per se as a third type of doctrine that could allow plaintiffs to bring suit. And what they would have to do is basically point to egregious conduct that they see in an automated hiring system, and then be able to make the allegation that that egregious conduct can or does have a high potential to result in disparate impact. And then the burden would shift to the employer to show that that conduct is not actually leading to disparate impact.
Blomster: Finally, in your conclusion, in terms of laying out the problems with treating algorithms like an all-knowing oracle, you use an example that I love, where you take it all the way back to ancient Greek mythology. What is the parallel between the story that you use and the discussion that we're having today?
Ajunwa: The parallel that I talk about is the Oracle of Delphi. And this oracle was a place where people could go and ask questions about their life, about what they needed. The thing there, however, is that the oracle would give you an answer, but you had to interpret the answer. So, unfortunately for many of the people, because they were adherents, right, they were the faithful, they just took the answer at face value rather than really thinking about what the answer really represented. And this would really lead a lot of them astray.
So, specifically, the king of Lydia asked this oracle: if I go to war, will I win? And the oracle answered, if you go to war, an empire will fall. And the king of Lydia took this to mean, oh, I'm gonna win, because an empire is going to fall. So, he did go to war, and an empire did fall. But the empire was his. So, the oracle was not inaccurate, right? Yeah, it was not false. You just had to interpret it further.
Blomster: Ifeoma Ajunwa is an award-winning scholar and assistant professor of Labor and Employment Law at Cornell University's ILR School. You can look forward to reading her forthcoming scholarly paper, "The Paradox of Automation as Anti-Bias Intervention," in 2020 in the Cardozo Law Review, and she also has a new book coming out, The Quantified Worker, that examines the role of tech in the workplace and its effects on management practices. So, look for that also in 2020. Learn more at ilr.cornell.edu and check out the show notes for links to more information about Ifeoma's work and our discussion today.
So, thank you very much for joining us.
Ajunwa: Thank you so much for having me.