Computer Programs May Be Learning Human Prejudices

Monday, 17 August 2015 - 9:50AM
Artificial Intelligence
At first blush, it seems that using a completely objective, mathematical algorithm to scan resumes would eliminate the danger of latent biases creeping into hiring decisions. But a new study suggests that we may be inadvertently teaching advanced computer programs to share those same human biases.

In artificial intelligence, the "holy grail" is creating a machine that thinks like a human. And the human brain is capable of many impressive feats, in large part as a result of our ability to make connections and extrapolate from pieces of data. But unfortunately, as computer programs become more advanced and come closer to approximating true artificial intelligence, there is a significant risk that the algorithms will reflect human biases, such as those based on race and gender.

"The irony is that the more we design artificial intelligence technology that successfully mimics humans, the more that A.I. is learning in a way that we do, with all of our biases and limitations," lead author Suresh Venkatasubramanian said in a statement.


In the study, a team of computer scientists from the University of Utah, the University of Arizona, and Haverford College examined algorithms used by companies to screen applicants in the early stages of hiring. These programs often use machine-learning algorithms, which are designed to detect patterns of behavior (similar technology is used to generate recommendations on websites like Amazon and Netflix). So not only is there a danger of de facto discrimination; the programs may actually end up discriminating against specific people purely on the basis of their race or gender.
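To make the mechanism concrete, here is a minimal, hypothetical sketch in Python of how a screening model trained on past hiring decisions can absorb a bias it was never explicitly given. The feature names and data below are invented purely for illustration and are not from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical resume features; "zip_code_group" stands in for any attribute
# that happens to correlate with race or gender.
years_experience = rng.integers(0, 15, n)
zip_code_group = rng.integers(0, 2, n)

# Simulated historical decisions that were partly driven by the correlated
# feature -- exactly the kind of pattern a machine-learning screener learns.
hired = ((years_experience > 5) & (zip_code_group == 1)).astype(int)

X = np.column_stack([years_experience, zip_code_group])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# A large weight on the proxy feature shows the model has absorbed the
# historical bias without ever seeing race or gender directly.
print(dict(zip(["years_experience", "zip_code_group"], model.coef_[0])))
```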

"There's a growing industry around doing résumé filtering and résumé scanning to look for job applicants, so there is definitely interest in this," said Venkatasubramanian. "If there are structural aspects of the testing process that would discriminate against one community just because of the nature of that community, that is unfair."


According to Venkatasubramanian, the researchers don't have definitive proof that this is happening, but their initial research has at least shown that this kind of discrimination is a risk. To test an algorithm for potential bias, he suggests hiding applicants' demographic information and then gauging whether the program can still accurately predict their race or gender. If it can, that would suggest the program is quite literally stereotyping people, just as a human would.
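As a rough sketch of what that audit could look like in code, assuming the resume features have already been converted into a numeric table, one could measure how well the remaining features predict the hidden attribute. The function name and data layout here are assumptions for illustration, not the authors' implementation.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def protected_attribute_predictability(features, protected):
    """Cross-validated accuracy of predicting a hidden protected attribute
    (e.g. gender) from the remaining resume features alone."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, features, protected, cv=5).mean()

# Hypothetical usage: accuracy well above the base rate would suggest the
# screening features act as a proxy for the protected attribute.
# score = protected_attribute_predictability(X_without_demographics, gender_labels)
```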

If a problem is found, the authors suggest it can be fixed by redistributing the data in the resumes so that the algorithm can no longer pick up on race or gender as a variable. But the same test may reveal a more structural issue, in which case eliminating bias could be much more difficult than originally believed.
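One way such a redistribution might work, as an illustrative simplification rather than the authors' exact procedure, is to adjust each feature so its distribution looks the same in every demographic group, for example by mapping each value to its within-group quantile and then back to the pooled distribution:

```python
import numpy as np

def repair_feature(values, groups):
    """Replace each value with the pooled value at its within-group quantile,
    so the feature's distribution no longer differs between groups."""
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    repaired = np.empty_like(values)
    pooled_sorted = np.sort(values)
    n = len(values)
    for g in np.unique(groups):
        mask = groups == g
        ranks = values[mask].argsort().argsort()   # within-group ranks 0..k-1
        quantiles = (ranks + 0.5) / mask.sum()     # within-group quantiles in (0, 1)
        idx = np.clip((quantiles * n).astype(int), 0, n - 1)
        repaired[mask] = pooled_sorted[idx]
    return repaired

# Hypothetical usage: a feature repaired this way keeps its ordering within
# each group but can no longer be used to tell the groups apart.
# repaired = repair_feature(years_of_experience, gender_labels)
```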

Via Gizmodo.