Why We Need Diversity Before AI Takes Over

Sarah Thomas
1/31/2017

We hear a lot about how artificial intelligence (AI) has the potential to displace jobs, especially those held by women in tech, but should we also worry about a future overrun with sexist, racist machines?

It's not hard to envision, unfortunately. If AI is not designed to reflect all types of individuals, but rather only the white men who are writing the algorithms, that might be the scenario we end up with. (See How to Speak Knowledgeably About the AI Threat.)

As we wrap up AI month here on Light Reading, it's worth exploring what the technology, which gives computers human-like intelligence and reasoning, means for women. An excellent feature in Foreign Policy magazine this month got me thinking about what the true threat with AI is, and there seems to be more than one.

First, the threat to jobs is real, and it's weighted more heavily toward women. Second, and arguably more concerning, is the damage that can happen when AI infiltrates every aspect of our lives and brings harmful stereotypes and biases with it.

First, on the jobs front: The World Economic Forum predicts that 5.1 million positions worldwide will be lost by 2020, hitting women the hardest. Men will face nearly 4 million job losses and 1.4 million gains, while women will see 3 million losses against just 0.55 million gains. This is because AI will displace jobs that women hold at higher rates, such as administrative positions, and because it'll affect the tech industry, where there is already a well-documented disparity. (See More Women in Tech Is Critically Important and A Vast Valley: Tech's Inexcusable Gender Gap.)

Second, and less explored, is what these computers will look like. Like the tech industry at large, the field of AI is dominated by white males. AI learns from humans -- these white, male humans. If human biases, whether unconscious or deliberate, make their way into algorithms, they get reflected in the robots and programs that result. The machines may be "intelligent," but who cares if they are also racist, sexist and painfully stereotypical?

We've already seen some examples of this happening. Here are just a few:

  • In 2015, Google (Nasdaq: GOOG)'s photo-recognition feature misidentified black faces as gorillas -- not on purpose, but because it was largely trained on white faces.
  • Snapchat allowed, and later withdrew, two filters that contorted facial features into bucktoothed Asian caricatures or blackface.
  • Microsoft Corp. (Nasdaq: MSFT)'s Millennial chatbot Tay was designed to get "smarter" the more you talk to her, but she was also easily -- and quite quickly -- manipulated to mimic racist tweets, sex chat with users and say charming things like, "gamergate is good and women are inferior." That most definitely wasn't Microsoft's intention, but it also wasn't an outcome the company foresaw and planned for.
  • In 2014, computer scientists at Carnegie Mellon University found that women were less likely than men to be shown ads on Google for highly paid jobs.
  • Remember all those Pokemon you caught during the Pokemon Go craze? Most likely, few of them were in predominantly black communities, because the game's creators didn't happen to spend time in them.
  • An AI system designed by Northpointe to predict the likelihood that an alleged offender will commit another crime in the future was shown to demonstrate a racial bias in its predictions.
  • Most of the robots created so far, as cool as they may be, have also been gendered to the extreme -- either masculine warriors or feminine, often submissive types.

Many of these may not have been intentional outcomes, but rather the result of unconscious biases programmed into the algorithms that allowed them to happen. As Foreign Policy points out, a sexist Twitter bot is one thing, but imagine a future where AI systems are involved in politics, employment, education, economics and every facet of life, as they are projected to be in the next few decades. That's a scary scenario.


Women in Comms' first networking breakfast and panel of 2017 is coming up on Wednesday, March 22, in Denver, Colorado, ahead of day two of the Cable Next-Gen Strategies conference. Register here to join us for what will be a great morning!


Heather Roff, an artificial intelligence and global security researcher at Arizona State University, tells Foreign Policy that AI can become very dangerous when algorithms start to make decisions for women, showing them only certain ads, job listings and stereotype-reinforcing search results. "[They] will manipulate my beliefs about what I should pursue, what I should leave alone, whether I should want kids, get married, find a job, or merely buy that handbag," she says.

The AI challenge demonstrates yet another reason why it is so important to recognize biases, dispel stereotypes and create machines that mimic the diversity of people in the world. A big way to do this, of course, is to have a diversity of engineers and designers building AI in the first place. Diversity of thoughts and backgrounds can lead to programs and algorithms that are both sensitive and accurate. It's no easy task given the dearth of women and minorities in the field, and of those studying to enter it, but it's more important than ever.

We have enough sexism and stereotyping in advertising, kids' toys, stock photos and many other parts of society. Let's not build sexist, stereotypical robots and computers as well.


— Sarah Thomas, Director, Women in Comms

Sarah Thomas,
User Rank: Blogger
1/31/2017 | 10:53:00 AM
Using AI for good
On the flip side of this, you can find examples of how AI has the potential to level the playing field for women. Here's one example of a startup using algorithms to review programmers' code and strip away biographical information like age, race, gender, past work and schooling: http://www.bbc.com/news/business-38393802. The idea is that recruiters judge candidates based only on their abilities, so unconscious biases can't seep through.

The use cases and implications of AI are far-reaching, so it'll find its way into all aspects of our economy and daily life, which is why it's so important to make sure it's created with purpose and an accurate worldview.