Arthur J. Villasanta – Fourth Estate Contributor
Redmond, WA, United States (4E) – Microsoft is being blasted for patting itself on the back for making its racist facial recognition software less racist.
The company announced major improvements to its fundamentally biased – some say racist – Azure-based Face API facial recognition software. The AI was criticized in a research paper earlier this year for its unacceptably high error rate – 20.8 percent – when attempting to identify the gender of people of color, especially women with darker skin tones.
Unsurprisingly, Microsoft’s AI identified the gender of photos of “lighter male faces” with an error rate of zero percent, said the study. This massive discrepancy led to charges that Microsoft had developed a racist software that can be used by the police and government authorities to oppress people of color.
In January, Microsoft said U.S. Immigration and Customs Enforcement (ICE) will use its Azure Government Cloud service to “process data on edge devices or utilize deep learning capabilities to accelerate facial recognition and identification.” The announcement led some Microsoft employees to demand the company cancel its contract with ICE, which is under fire for its lead role in separating children of migrant families from their parents.
Strangely, Microsoft didn’t have enough images of black and brown people when it set out to develop the software, and it evidently didn’t correct this gap before release, which skewed the end result. In its defense, Microsoft blamed the data it used to build the facial recognition software.
It did admit that facial recognition technologies are “only as good as the data used to train them,” but didn’t explain why it pursued development anyway despite the shortage of photos of people of color. It also announced that it’s fixing the problem – or trying to.
A barrier to its progress, Microsoft says, is that it’s working in a biased society. “One of the industry’s failings is that data generated by a biased society leads to biased results when it comes to training machine learning systems,” said Microsoft senior researcher Hanna Wallach in a blog post.
“We had conversations about different ways to detect bias and operationalize fairness,” wrote Wallach. “We talked about data collection efforts to diversify the training data. We talked about different strategies to internally test our systems before we deploy them.”
She argues the failure was never solely that the technology didn’t work properly for anyone who wasn’t white and male. Nor do the problems end with Microsoft getting very good at identifying and gendering black and brown people.
Microsoft, however, said its Face API team has made three major changes: it “expanded and revised training and benchmark datasets; launched new data collection efforts to further improve the training data by focusing specifically on skin tone, gender and age; and improved the classifier to produce higher precision results.”
Microsoft claimed its new fixes reduced the error rates for men and women with darker skin by up to 20 times. For all women, the company said the error rates were reduced by nine times.
Provided by FeedSyndicate