The social media juggernaut made headlines after users discovered apparent racial bias in its image preview algorithm. The discovery began when Twitter user Colin Madland used the platform to call out Zoom’s failure to recognize the faces of his Black colleagues when they used green-screen virtual backgrounds, only to find, in a grand show of irony, that Twitter’s image-cropping algorithm behaved similarly and deprioritized Black faces. Other users got in on the trend, sparking a series of viral tweets showing that the algorithm consistently prioritized white and lighter-skinned faces, in examples ranging from photos of people to cartoon characters and even dogs. The failure points to a broader cultural problem in the tech industry, which has consistently failed to account for minority groups, and that failure has spilled over into the technology itself. “It makes minorities feel terrible, like they’re not important, and it can be used for other things that may cause more serious harm down the line,” Erik Learned-Miller, professor of computer science at the University of Massachusetts, said in a phone interview. “Once you’ve decided what a piece of software can be used for and all the harms that can occur, then we begin talking about the ways to minimize the chance of those happening.”

Canary on the Timeline

Twitter uses neural networks to automatically crop images embedded in tweets. The algorithm is supposed to detect faces to preview, but it appears to have a noticeable white bias. Company spokeswoman Liz Kelley responded to the concerns on Twitter: “thanks to everyone who raised this. we tested for bias before shipping the model and didn’t find evidence of racial or gender bias in our testing, but it’s clear that we’ve got more analysis to do. we’ll open source our work so others can review and replicate.”

Learned-Miller, co-author of the white paper “Facial Recognition Technologies in the Wild: A Call for a Federal Office,” is a leading researcher on face-based AI software and its potential for harm. He has discussed the negative impacts of image-learning software for years and has stressed the importance of mitigating these biases as fully as possible.

Many facial recognition algorithms rely on reference sets, often called training sets: collections of images used to tune the behavior of image-learning software so that it can recognize a wide array of faces. When those reference sets lack diversity, the resulting models can exhibit exactly the kind of skew seen in Twitter’s cropping algorithm. “Certainly, it’s a huge issue for any minority, but I think there’s a much broader issue as well,” said Learned-Miller. “It relates to a lack of diversity in the tech sector and the need for a centralized, regulatory force to show the proper usages of this kind of powerful software prone to misuse and abuse.”
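Audits of this kind typically compare a model’s behavior across demographic groups on a labeled evaluation set. The snippet below is a minimal, hypothetical sketch of such a check, not Twitter’s actual testing code: it pits pairs of faces against a stand-in scoring function and reports how often each group “wins” the crop. Every function and variable name here (such as saliency_score) is illustrative, not drawn from any real codebase.

```python
# Minimal sketch of a demographic-disparity check for an image-cropping model.
# All names are hypothetical; the scorer is a random stand-in for a real model.
from collections import Counter
import random

def saliency_score(image):
    # Stand-in for the real model's score of how "interesting" a face is.
    return random.random()

def crop_winner(pair):
    # The preview crop keeps whichever face scores higher.
    a, b = pair
    return a if saliency_score(a["pixels"]) >= saliency_score(b["pixels"]) else b

# Hypothetical evaluation pairs: each pair contains one face from each group.
pairs = [({"group": "A", "pixels": ...}, {"group": "B", "pixels": ...})
         for _ in range(1000)]

wins = Counter(crop_winner(pair)["group"] for pair in pairs)
rate_a = wins["A"] / len(pairs)
rate_b = wins["B"] / len(pairs)
print(f"Group A selected {rate_a:.1%}, group B selected {rate_b:.1%}")
# A large gap between the two selection rates would be evidence of the kind
# of demographic skew users reported seeing in Twitter's cropped previews.
```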

Tech Lacking Diversity

Twitter may be the latest tech company under scrutiny, but this is far from a new problem. The tech industry remains predominantly white and male-dominated, and researchers have found that this lack of diversity replicates systemic, historical imbalances in the software it produces. In a 2019 report, New York University’s AI Now Institute found that Black people make up less than 6 percent of the workforce at the country’s top tech firms. Similarly, women account for only 26 percent of workers in the field, a smaller share than they held in 1960. “There are a lot of people who haven’t thought through the issues and don’t really realize how these things can cause harm and how significant these harms are,” Learned-Miller said of AI image learning. “Hopefully, that number of people is shrinking!”