We hear a lot about how artificial intelligence (AI) has the potential to displace jobs, especially those held by women in tech, but should we also worry about a future overrun with sexist, racist machines?
It's not hard to envision, unfortunately. If AI is not designed to reflect all types of individuals, but rather only the white men who are writing the algorithms, that might be the scenario we end up with. (See How to Speak Knowledgeably About the AI Threat.)
As we wrap up AI month here on Light Reading, it's worth exploring what the technology, which gives computers human-like intelligence and reasoning, means for women. An excellent feature in Foreign Policy magazine this month got me thinking about where the true threat of AI lies, and there seems to be more than one.
First, the threat to jobs is real, and it falls more heavily on women. Second, and arguably more concerning, is the damage that can happen when AI infiltrates every aspect of our lives and brings harmful stereotypes and biases along with it.
First, on the jobs front: The World Economic Forum predicts that 5.1 million positions worldwide will be lost by 2020, hitting women the hardest. Men will face nearly 4 million job losses and 1.4 million gains, while women will see 3 million losses and only 0.55 million gains. Put another way, women stand to lose more than five jobs for every one gained, versus roughly three to one for men. This is because AI will displace the kinds of jobs women hold at higher rates, such as administrative positions, and because it will affect the tech industry, where there is already a well-documented gender disparity. (See More Women in Tech Is Critically Important and A Vast Valley: Tech's Inexcusable Gender Gap.)
Second, and less explored, is what these computers will look like. Like the tech industry at large, the field of AI is dominated by white men. AI learns from humans -- these white, male humans. If human biases, whether unconscious or deliberate, make their way into algorithms, they get reflected in the robots and programs that result. The machines may be "intelligent," but who cares if they are also racist, sexist and painfully stereotypical?
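To make that mechanism concrete, here is a minimal sketch of how a skewed training set turns into a skewed model. Everything in it -- the toy resume snippets, the labels, the word-count scoring -- is invented for illustration; no real screening product works this simply.

```python
# Minimal sketch of how bias in training data flows into a model.
# The data, labels and scoring scheme are all invented; this is
# not any real company's system.

from collections import Counter

# Toy "historical hiring" data: past decisions skew against resumes
# that mention a women's organization, so the labels are biased.
training_data = [
    ("captain of chess club", 1),
    ("lead engineer on robotics team", 1),
    ("captain of women's chess club", 0),
    ("lead engineer of women's coding group", 0),
    ("robotics team member", 1),
    ("member of women's robotics society", 0),
]

# "Training": count how often each word appears in hired vs. rejected
# resumes. No word is treated specially; the skew comes from the data.
hired, rejected = Counter(), Counter()
for text, label in training_data:
    (hired if label == 1 else rejected).update(text.split())

def score(resume: str) -> float:
    """Higher score = more like past 'hired' resumes."""
    words = resume.split()
    return sum(hired[w] - rejected[w] for w in words) / len(words)

# Two near-identical resumes, differing by one word -- the model has
# quietly learned that "women's" predicts rejection.
print(score("captain of robotics team"))          # positive score
print(score("captain of women's robotics team"))  # dragged down
```

Note that the code never mentions gender anywhere: the bias rides in entirely on the historical labels, which is exactly why it is so easy to build in without noticing.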
We've already seen some examples of this happening. Here are just a few:
In 2015, Google (Nasdaq: GOOG)'s photo-recognition feature labeled photos of black people as gorillas. That wasn't deliberate; the system had largely been trained on white faces.
Snapchat released, and later withdrew, two filters that contorted users' facial features into a buck-toothed Asian caricature and into blackface.
Microsoft Corp. (Nasdaq: MSFT)'s Millennial chatbot Tay was designed to get "smarter" the more you talked to her, but she was also easily -- and quite quickly -- manipulated to mimic racist tweets, engage in sex chat with users and say charming things like, "gamergate is good and women are inferior." That most definitely wasn't Microsoft's intention, but it also wasn't an outcome the company foresaw and planned for.
In 2014, computer scientists at Carnegie Mellon University found that women were less likely than men to be shown ads on Google for highly paid jobs.
Remember all those Pokemon you caught when Pokemon Go was all the rage? Most likely, few of them were in predominantly black neighborhoods, because the game's creators didn't happen to spend time there.
An AI system designed by Northpointe to predict the likelihood that an alleged offender will commit another crime was shown to be racially biased in its predictions; a sketch of how that kind of bias gets measured follows this list.
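How do you show a risk score is biased? The audits that surfaced this finding compared error rates across demographic groups, in particular the false positive rate: defendants flagged high-risk who did not go on to reoffend. Below is a minimal sketch of that calculation on invented numbers; it is not Northpointe's code or data, just the shape of the analysis.

```python
# Sketch of a simple fairness audit: compare false positive rates
# (flagged high-risk but did not reoffend) across two groups.
# All records below are invented for illustration.

def false_positive_rate(records):
    """records: list of (predicted_high_risk, reoffended) pairs."""
    false_positives = sum(1 for pred, actual in records if pred and not actual)
    negatives = sum(1 for _, actual in records if not actual)
    return false_positives / negatives

group_a = [(True, False), (True, False), (False, False), (True, True)]
group_b = [(False, False), (True, False), (False, False), (True, True)]

# A large gap between groups is the red flag auditors look for.
print(f"Group A FPR: {false_positive_rate(group_a):.2f}")  # 0.67
print(f"Group B FPR: {false_positive_rate(group_b):.2f}")  # 0.33
```

Both groups get scored by the same model; the disparity shows up only when you break the errors out by group, which is why aggregate accuracy numbers alone can hide this kind of bias.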
Many of these weren't intentional outcomes, but the result of unconscious biases finding their way into algorithms, and of no one anticipating or checking for them. As Foreign Policy points out, a sexist Twitter bot is one thing, but imagine a future where AI systems are involved in politics, employment, education, economics and every other facet of life, as they are projected to be within the next few decades. That's a scary scenario.
Women in Comms' first networking breakfast and panel of 2017 is coming up on Wednesday, March 22, in Denver, Colorado, ahead of day two of the Cable Next-Gen Strategies conference. Register here to join us for what will be a great morning!
Heather Roff, an artificial intelligence and global security researcher at Arizona State University, tells Foreign Policy that AI can become very dangerous when algorithms start to make decisions for women, showing them only certain ads, job listings and stereotype-reinforcing search results. "[They] will manipulate my beliefs about what I should pursue, what I should leave alone, whether I should want kids, get married, find a job, or merely buy that handbag," she says.
The AI challenge demonstrates yet another reason why it is so important to recognize biases, dispel stereotypes and create machines that reflect the diversity of people in the world. A big way to do this, of course, is to have a diverse group of engineers and designers building AI in the first place. Diversity of thought and background can lead to programs and algorithms that are both sensitive and accurate. It's no easy task given the dearth of women and minorities in, or studying to enter, the field, but it's more important than ever.
We have enough sexism and stereotyping in advertising, kids' toys, stock photos and many other parts of society. Let's not build sexist, stereotypical robots and computers as well.
— Sarah Thomas, Director, Women in Comms