Algorithms and their biases in the crosshairs of anti-racist movements


An African-American man wrongly arrested because of facial recognition software: the case has reignited the debate over bias in artificial intelligence, amid mass mobilization against racism and police violence.

The case dates back to early January: Robert Williams was arrested in Detroit and spent 30 hours in detention because software judged the photo on his driver's licence to be identical to surveillance-camera images of a watch thief. Wrongly.

For the American Civil Liberties Union (ACLU), which filed a complaint on his behalf on June 24, “while this is the first known case, he is probably not the first person to have been wrongly arrested and questioned on the basis of a false facial recognition match.”

For Joy Buolamwini, founder of the activist group Algorithmic Justice League, the case is indicative “of how systemic racism can be encoded and reflected in artificial intelligence (AI).”

Under pressure from associations such as the powerful ACLU, Microsoft, Amazon and IBM announced in early June that they would restrict the use of their facial analysis tools by law enforcement.

AI relies on machine learning from data supplied by its designers, which the machine then analyzes. If that data is biased, the result is distorted.
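The mechanism can be sketched in a few lines. The toy model below (a hypothetical illustration, not any vendor's actual system; all names and data are invented) simply learns the most frequent label seen for each input, so it faithfully reproduces whatever skew its training data contains:

```python
from collections import Counter, defaultdict

def train(examples):
    """Learn the most frequent label observed for each object."""
    counts = defaultdict(Counter)
    for obj, label in examples:
        counts[obj][label] += 1
    # For each object, keep the majority label.
    return {obj: c.most_common(1)[0][0] for obj, c in counts.items()}

# Balanced sample: thermometers are mostly labeled as medical tools.
balanced = [("thermometer", "tool")] * 9 + [("thermometer", "weapon")] * 1
# Skewed sample: the same object is over-associated with violence.
skewed = [("thermometer", "tool")] * 3 + [("thermometer", "weapon")] * 7

print(train(balanced)["thermometer"])  # tool
print(train(skewed)["thermometer"])    # weapon
```

The model itself is identical in both runs; only the composition of the training data changes the verdict, which is the core of the bias problem the article describes.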

A study by the Massachusetts Institute of Technology published in February 2018 revealed strong disparities across population groups in the leading facial recognition programs tested, with error rates below 1% for white men but up to 35% for black women.

Thermometer and pistol

In a tweet that went viral, Nicolas Kayser-Bril of the NGO AlgorithmWatch showed that, when presented with images of people holding a forehead thermometer, the image analysis program Google Vision recognized “binoculars” in a white-skinned hand but identified a “pistol” in a black-skinned hand.

According to him, this bias probably stems from the fact that the images of black people used in the training database were more often associated with violence than those of white people.

Google acknowledged to AlgorithmWatch that the result was “unacceptable”.

Yet software of this type is legion, marketed to companies and governments around the world, and not only by the big names of tech.

“This makes it very difficult to identify the conditions under which the dataset was collected, the characteristics of the images, and the way the algorithm was trained,” says Seda Gürses, a researcher at Delft University of Technology in the Netherlands.

This multiplicity of actors may reduce costs, but the complexity blurs traceability and the assignment of responsibility, according to the researcher.

“A racist police officer can be retrained or replaced, whereas in the case of an algorithm,” the decisions made within companies are dictated by the algorithm, which obeys criteria that are above all economic, according to Gürses.

A finding that also applies to programs claiming to predict criminal behavior.

A case in point is the recent controversy over software claiming to “predict with 80% accuracy whether a person is a criminal based solely on a photo of their face”.

More than 2,000 people, including many scientists, signed a petition asking the publisher Springer Nature not to publish an article dedicated to this technology, developed and defended in the article by professors at Harrisburg University in Pennsylvania.

But the publisher, contacted by AFP, said the article had “never been accepted for publication”.

“The problem is not so much the algorithm as the researchers' presuppositions,” says Nicolas Kayser-Bril, who rejects a “purely technological” approach. “Even with datasets of excellent quality, we can do nothing if we do not take into account all the social issues behind them. For that, we need to work with sociologists, at the very least.”

“You can't change the history of racism and sexism,” writes Mutale Nkonde, an artificial intelligence researcher at Stanford and Harvard. “But you can make sure the algorithm does not become the final decision-maker.”
