Those damned SJWs are ruining AI development, too!
Force-feeding the IT engineers their PC agenda!
In short: some learning algorithms have been observed to behave in discriminatory ways. The article suggests that, since the engineers are mostly white men, the data the algorithms are fed is unintentionally skewed. Of course, in some cases the algorithms themselves may be flawed, but the lack of diversity is something that should be taken seriously.
An interesting example is Nikon. Nikon designs its software in Japan, so it's not a white-guy problem. My guess is that the algorithm learned to recognize typical Japanese face and eye shapes better than continental East Asian ones. Anyway, I digress.
Since algorithms and AIs influence our lives more and more, this seems like something that should be taken into account and, at the very least, researched further. Even if the algorithm itself is designed by a homogeneous group, the people involved in feeding data to it should be more diverse and trained to recognize potentially discriminatory consequences. We should try to fight existing problems instead of exacerbating them.
Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to.
Take a small example from last year: Users discovered that Google’s photo app, which applies automatic labels to pictures in digital photo albums, was classifying images of black people as gorillas. Google apologized; it was unintentional.
But similar errors have emerged in Nikon’s camera software, which misread images of Asian people as blinking, and in Hewlett-Packard’s web camera software, which had difficulty recognizing people with dark skin tones.
This is fundamentally a data problem. Algorithms learn by being fed certain images, often chosen by engineers, and the system builds a model of the world based on those images. If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces.
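To make that last point concrete, here is a minimal toy sketch (entirely hypothetical, not any vendor's actual pipeline): a nearest-centroid classifier trained on data that is 95% group A and 5% group B. Because the feature geometry differs slightly between groups, the model fit mostly to group A performs noticeably worse on group B, even though nothing in the code singles out either group. All names, centers, and sample sizes are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(center, n):
    # Gaussian blob of n 2-D "feature vectors" around a class center.
    return rng.normal(loc=center, scale=1.0, size=(n, 2))

# Group A: the two classes are separated along the x axis (475 each).
A0_train, A1_train = sample([0, 0], 475), sample([3, 0], 475)
# Group B: separated along the y axis instead -- only 25 examples each.
B0_train, B1_train = sample([0, 2], 25), sample([0, 5], 25)

# Pooled training set: 95% group A, 5% group B.
X0 = np.vstack([A0_train, B0_train])
X1 = np.vstack([A1_train, B1_train])
c0, c1 = X0.mean(axis=0), X1.mean(axis=0)  # per-class centroids

def predict(X):
    # Assign each point to the nearer class centroid.
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return (d1 < d0).astype(int)

def accuracy(center0, center1, n=500):
    # Fresh test set drawn from one group's class centers.
    X = np.vstack([sample(center0, n), sample(center1, n)])
    y = np.repeat([0, 1], n)
    return (predict(X) == y).mean()

acc_A = accuracy([0, 0], [3, 0])  # well represented in training
acc_B = accuracy([0, 2], [0, 5])  # underrepresented in training
print(f"group A accuracy: {acc_A:.2f}")
print(f"group B accuracy: {acc_B:.2f}")
```

The centroids end up dominated by group A's geometry, so the learned boundary separates group A well while misclassifying much of group B; the gap comes purely from the data mix, which is the article's point.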
A very serious example was revealed in an investigation published last month by ProPublica. It found that widely used software that assessed the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.