Why We Need Diversity Before AI Takes Over

Biases from computer programmers will seep into the algorithms behind AI, meaning computers of the future could be just as homogeneous as today's tech workforce.

Sarah Thomas, Director, Women in Comms

January 31, 2017


We hear a lot about how artificial intelligence (AI) has the potential to displace jobs, especially those held by women in tech, but should we also worry about a future overrun with sexist, racist machines?

It's not hard to envision, unfortunately. If AI is not designed to reflect all types of individuals, but rather only the white men who are writing the algorithms, that might be the scenario we end up with. (See How to Speak Knowledgeably About the AI Threat.)

As we wrap up AI month here on Light Reading, it's worth exploring what the technology, which gives computers human-like intelligence and reasoning, means for women. An excellent feature in Foreign Policy magazine this month got me thinking about what the true threat of AI is, and there seems to be more than one.

First, the threat to jobs is real, and it's weighted more heavily towards women. Second, and arguably more concerning, is the damage that can happen when AI infiltrates every aspect of our lives and brings harmful stereotypes and biases with it.

First, on the jobs front: The World Economic Forum predicts that 5.1 million positions worldwide will be lost by 2020, hitting women the hardest. Men will face nearly 4 million job losses and 1.4 million gains, while women will see 3 million losses against just 0.55 million gains. This is because AI will displace jobs that women hold at higher rates, such as administrative positions, and because it will affect the tech industry, where there is already a well-documented disparity. (See More Women in Tech Is Critically Important and A Vast Valley: Tech's Inexcusable Gender Gap.)

Second, and less explored, is what these computers will look like. Like the tech industry at large, the field of AI is dominated by white males. AI learns from humans -- these white, male humans. If human biases, whether unconscious or deliberate, make their way into algorithms, it gets reflected in the robots and programs that result. The machines may be "intelligent," but who cares if they are also racist, sexist and painfully stereotypical?
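To make the mechanism concrete, here is a minimal sketch of how a purely statistical "model" absorbs the skew of its training data. The corpus, the counting scheme and the function names below are invented for illustration; real systems are far more complex, but the principle is the same: the model learns whatever associations dominate the text it is fed.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus, skewed the way real-world text often is.
corpus = [
    "the doctor said he would review the chart",
    "the doctor said he was running late",
    "the doctor said she would call back",
    "the nurse said she would check the dosage",
    "the nurse said she was on shift",
]

# Count which pronoun follows each profession -- a crude stand-in for
# how statistical models pick up associations from training text.
pronouns = {"he", "she"}
assoc = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, word in enumerate(words[:-2]):
        if word in {"doctor", "nurse"} and words[i + 2] in pronouns:
            assoc[word][words[i + 2]] += 1

def predict_pronoun(profession):
    """Return the pronoun the 'model' considers most likely."""
    return assoc[profession].most_common(1)[0][0]

print(predict_pronoun("doctor"))  # -> "he": reflects the corpus skew, not fact
print(predict_pronoun("nurse"))   # -> "she"
```

Nothing in the code is "sexist"; the bias comes entirely from the data, which is exactly why who collects and curates that data matters.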

We've already seen some examples of this happening. Here are just a few:

  • In 2015, Google (Nasdaq: GOOG)'s photo-recognition feature misidentified black faces as gorillas -- not intentionally, but because it had been trained largely on white faces.

  • Snapchat allowed, and later withdrew, two filters that contorted facial features into bucktooth Asian caricatures or applied blackface.

  • Microsoft Corp. (Nasdaq: MSFT)'s Millennial chatbot Tay was designed to get "smarter" the more you talked to her, but she was also easily -- and quite quickly -- manipulated to mimic racist tweets, engage in sex chat with users and say charming things like, "gamergate is good and women are inferior." That most definitely wasn't Microsoft's intention, but it also wasn't an outcome the company foresaw and planned for.

  • In 2014, computer scientists at Carnegie Mellon University found that women were less likely than men to be shown ads on Google for highly paid jobs.

  • Remember all those Pokemon you caught when Pokemon Go was at its peak? Most likely, few were caught in predominantly black communities, because the game's creators didn't happen to spend time in them.

  • An AI system designed by Northpointe to predict the likelihood that an alleged offender will commit another crime in the future was shown to demonstrate a racial bias in its predictions.

  • Most of the robots created so far, as cool as they may be, have also been gendered to the extreme -- either masculine warrior types or feminine, often submissive, ones.

Many of these outcomes may not have been intentional, but rather the result of unconscious biases baked into the algorithms that allowed them to happen. As Foreign Policy points out, a sexist Twitter bot is one thing, but imagine a future where AI systems are involved in politics, employment, education, economics and every facet of life, as they are projected to be in the next few decades. That's a scary scenario.

Women in Comms' first networking breakfast and panel of 2017 is coming up on Wednesday, March 22, in Denver, Colorado, ahead of day two of the Cable Next-Gen Strategies conference. Register here to join us for what will be a great morning!

Heather Roff, an artificial intelligence and global security researcher at Arizona State University, tells Foreign Policy that AI can become very dangerous when algorithms start to make decisions for women, showing them only certain ads, job listings and stereotype-reinforcing search results. "[They] will manipulate my beliefs about what I should pursue, what I should leave alone, whether I should want kids, get married, find a job, or merely buy that handbag," she says.

The AI challenge demonstrates yet another reason why it is so important to recognize biases, dispel stereotypes and create machines that mimic the diversity of people in the world. A big way to do this, of course, is to have a diversity of engineers and designers building AI in the first place. Diversity of thoughts and backgrounds can lead to programs and algorithms that are both sensitive and accurate. It's no easy task to solve given the dearth of women and minorities in the field and studying to enter the field, but it's more important than ever.

We have enough sexism and stereotyping in advertising, kids' toys, stock photos and many other parts of society. Let's not build sexist, stereotypical robots and computers as well.


— Sarah Thomas, Director, Women in Comms

About the Author(s)

Sarah Thomas

Director, Women in Comms

Sarah Thomas's love affair with communications began in 2003 when she bought her first cellphone, a pink RAZR, which she duly "bedazzled" with the help of superglue and her dad.

She joined the editorial staff at Light Reading in 2010 and has been covering mobile technologies ever since. Sarah got her start covering telecom in 2007 at Telephony, later Connected Planet, may it rest in peace. Her non-telecom work experience includes a brief foray into public relations at Fleishman-Hillard (her cussin' upset the clients) and a hodge-podge of internships, including spells at Ingram's (Kansas City's business magazine), American Spa magazine (where she was Chief Hot-Tub Correspondent), and the tweens' quiz bible, QuizFest, in NYC.

As Editorial Operations Director, a role she took on in January 2015, Sarah is responsible for the day-to-day management of the non-news content elements on Light Reading.

Sarah received her Bachelor's in Journalism from the University of Missouri-Columbia. She lives in Chicago with her 3DTV, her iPad and a drawer full of smartphone cords.

Away from the world of telecom journalism, Sarah likes to dabble in monster truck racing, becoming part of Team Bigfoot in 2009.
