How Facebook's new AI plan is working to fix the technology's serious diversity problem
As a whole, the tech world isn't exactly known for diversity or inclusivity. The sector employs a higher percentage of white people than any other private industry, about 83% of tech execs are white, and only about 20% of tech jobs are currently held by women. It only makes sense, then, that the majority of products coming out of Silicon Valley are tailored to white men, including artificial intelligence (AI). To address this issue, Facebook engineer Lade Obamehinti is launching a three-part plan to make AI more inclusive, benefiting both the tech industry and its users.
Obamehinti, the leader of technical strategy for Facebook's AR/VR team, came up with the plan after her team tested a pre-production smart camera, designed to focus on whoever is speaking, by having Obamehinti tell it an animated story about food. Yet as she talked, the AI camera "zoomed in on my white, male colleague instead of me," the engineer recalled while speaking on the second day of the F8 2019 conference in May.
Obamehinti used this experience to improve the AI software's data sets with various genders, ages, and skin tones to ensure no one is erased when it comes to smart video. And later, she came up with her three-part plan, which will combine Facebook's user studies, algorithm development, and system validation to ensure that the company's AI (which is currently used by the site to remove spam, fake accounts, and propaganda) is useful for all people, not just white men.
While Obamehinti's plan will hopefully build more inclusive products for Facebook, the entire AI world could benefit from an overhaul. Since AI is a human-programmed computer intelligence that only knows what it's been taught and can't use human thought processes like empathy to broaden its perspective, it has a tendency to be discriminatory. When Microsoft released its Tay chatbot in 2016, for instance, many Twitter users found that when they told the bot racist or sexist things, it repeated the comments back or came up with its own, similar versions. Then there's the AI system launched by the Chinese government earlier this year, which is being used to profile minorities like Muslims.
As robotics and AI become more widely used by both governments and individuals, the implications of overreaching, dangerous, and biased technology cannot be ignored. Whether we recognize it or not, many of us use AI regularly, from digital assistants like Siri and Alexa to facial recognition features on smartphones to algorithms on Spotify and Netflix that predict what we want next. As it grows increasingly prevalent, AI needs to reflect the diversity of its users and ensure that racist and exclusive experiences, like the one from Obamehinti's product test, don't continue to occur.
The first step to making this happen? Hiring more tech employees who aren't white men. "There is a lack of diversity and inclusivity in the tech industry as a whole, which explains the resulting lack of diversity in AI. The first step is to work on making the tech space more diverse," says Pilar Johnson, Chief of Staff for mobile tech company Prolific Interactive. "Diversity is ever-evolving. There are so many groups that need attention in the tech space."
Poor representation, Johnson notes, not only leads to tech businesses creating products that exclude some consumers, but limits creation across the board. "We need to look beyond hiring and develop diversity objectives in sales, marketing, and networking," Johnson explains. "When you diversify who you’re working and networking with, you’re bringing fresh perspectives to the table so it becomes a win-win for everybody."
With a more representative tech world — and people like Obamehinti taking the lead to change things for the better — AI can become a more equal experience for all. Caroline Sinders, machine learning designer, artist, and Mozilla Fellow, has a suggestion on how to start: "When designing to make AI less harmful, racist, misogynistic, it's always important to think about what you are asking the AI system to do, what kind of data does it have, and then how does it harm consumers," she explains. Harm, she notes, can be seen in surveilling certain populations, but also in accuracy bias. For example, is an algorithm suggesting trees be planted in wealthier and whiter neighborhoods, rather than more diverse ones? If so, creators should ask themselves, "What is that harm? And how could we correct it?", says Sinders, and make whatever changes are needed to ensure their AI isn't discriminating.
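To make the idea of "accuracy bias" concrete: one simple way teams audit for it is to compare how accurate a model is for each demographic group it serves, rather than looking only at overall accuracy. The sketch below is a toy illustration of that check, not anything from Facebook or Sinders' own tooling; the group names and prediction data are made up for the example.

```python
# Toy audit for accuracy bias: compare a model's accuracy per group.
# All group names and prediction records here are hypothetical.

def accuracy_by_group(records):
    """records: list of (group, predicted, actual) tuples.
    Returns a dict mapping each group to its accuracy."""
    correct, total = {}, {}
    for group, predicted, actual in records:
        total[group] = total.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (predicted == actual)
    return {g: correct[g] / total[g] for g in total}

# Made-up outputs from a hypothetical classifier:
# it happens to perform much better for group_a than group_b.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]

rates = accuracy_by_group(records)
gap = max(rates.values()) - min(rates.values())
print(rates)
if gap > 0.1:  # flag a large disparity for human review
    print(f"Accuracy gap of {gap:.0%} across groups -- investigate harm")
```

A real audit would use far larger samples and carefully chosen group labels, but the principle is the same: a model that looks accurate on average can still be failing one group, and that disparity only surfaces when you measure it per group.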
Beyond the clear ethical case for inclusive tech, diversity is simply good for business: an estimated $400 billion could be gained in tech if the industry ups its diversity efforts and works to build a more representative workforce. As AI and other forms of tech become even more ingrained in our everyday lives, more engineers like Obamehinti need to make inclusivity a priority when designing and testing their products.