Why Diversity Really, Really Matters in AI

Eric Iversen

Here it comes

Among candidates to become the next big thing in STEM, artificial intelligence gets a lot of attention. AI represents in-demand workplace skills, is cresting in popularity among students, and has a rising profile in the media. It gets this attention for all the change it promises to bring to schools, workplaces, and our civic lives.

However, AI, so far, promises little change in one respect of vital importance to all of these settings: demographics. The field presents many of the same diversity challenges that other tech-intensive fields do, with cultural and structural obstacles to entry and success for women and minorities. The lack of diversity in AI, though, creates a problem wider than equity within the field itself. As AI-driven products and systems reach ever further into daily life, the biases built into them by a demographically homogeneous design process stand to be extended and reinforced in countless exchanges between those systems and individuals.

To be sure, the promise of AI is enormous. From performing tedious, repetitive tasks to gleaning insights about the world from enormous data sets to standing in for humans, via robots, in physically threatening environments, the range of benefits we will reap from AI is huge. And we will almost certainly engineer useful applications for AI that we cannot now even imagine. Current applications, though, remain rather narrow, confined to limited, if still data-intensive, tasks – a system built to recognize faces, for example, can do only that, not also sort cars from birds from pencils.

The AI Index 2019 Annual Report offers the closest thing to a comprehensive overview of the field currently available.

The Stanford AI Index 2019 Annual Report highlights a bounty of trends that reflect the rapid establishment of AI as an economic and technological force. The share of AI-related jobs in the overall workforce has increased by a factor of five in the last 10 years, from 0.26% to 1.32%, with growth only accelerating. Large businesses are integrating AI into their operations at breakneck speed, with 58% doing so in 2019, up from 47% in 2018. And AI has become the most popular area of specialization for PhD students in computer science, with twice as many choosing it over second-place cybersecurity.

A familiar story

The makeup of people doing all of this work in AI, though, suggests the field faces the same kinds of diversity challenges that technology fields in general do. As a report from the AI Now Institute at New York University states, “The current data on the state of gender diversity in AI is dire…. The state of racial diversity is worse.” While data collection in the field struggles with a lack of comprehensive survey instruments, the researchers behind the report homed in on telling indicators:

  • Among active researchers in the field, 18% are female.

  • The AI research staffs at Facebook and Google are 15% and 10% female, respectively.

  • At Google, Facebook, and Microsoft, the workforce is between 2.5% and 4% black, and 3.5% to 6% Hispanic.

These kinds of statistics outline the challenges women and minorities can face in the field. Anecdotes illustrate them further: Timnit Gebru, a leading AI researcher at Google and a founder of Black in AI, attended the 2016 iteration of NeurIPS, the leading conference in machine learning, as one of only six black people among 8,500 participants.

A systems problem

Laid bare in the AI Now report is “the relationship between the workplace diversity crisis and the problems with bias and discrimination in AI systems.” Because AI design, manufacturing, and marketing operations so predominantly feature white men at the controls, the products and systems that emerge and get deployed in the marketplace can end up reflecting the worldviews and assumptions of these same white men. The report details chilling examples of AI systems deployed in decision rubrics related to housing, employment, health care, and criminal justice. Bias embedded in algorithms driving these AI products and systems has led to discrimination against women and minorities, with results ranging from inconvenient to injurious or worse.

“Data violence”

One researcher has coined the term “data violence” to describe these harms. When AI designers’ chosen data inputs fail to represent the fullest possible range of diverse human realities and attributes, the result can be AI products and systems reproducing the biases, recognized or not, of the people gathering and manipulating the data in the first place.
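To see the mechanism in miniature, here is a minimal, hypothetical Python sketch (not from the report, and with all names and numbers invented for illustration) using scikit-learn: a simple classifier trained on data where one group vastly outnumbers another ends up noticeably less accurate for the under-represented group.

```python
# Toy illustration of how unrepresentative training data can produce
# uneven error rates across groups. Everything here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate n toy samples for one group; 'shift' moves that group's
    feature distribution and its true decision threshold."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is barely represented.
Xa, ya = make_group(1000, 0.0)
Xb, yb = make_group(20, 1.5)

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced, held-out samples from each group.
Xa_test, ya_test = make_group(500, 0.0)
Xb_test, yb_test = make_group(500, 1.5)
print("accuracy, well-represented group:", round(model.score(Xa_test, ya_test), 2))
print("accuracy, under-represented group:", round(model.score(Xb_test, yb_test), 2))
```

Run as written, the model performs well for the majority group and little better than chance for the minority group, even though nothing in the code is "about" either group – the skew in the data does all the work.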

Ways forward

The AI Now authors issue recommendations both for improving workplace diversity and for addressing bias built into AI systems. Transparency in hiring and managing employees, a broader net for recruitment, third-party vetting of AI systems, greater visibility into use and deployment – these are just some of the proposed remedies.

Gebru’s organization, Black in AI, has mobilized minorities already in the field to step into more visible roles. She has noted, for example, that over 300 African-Americans attended the most recent NeurIPS meeting. Another activist effort is AI4All, a non-profit originating at Stanford that has mounted a nationwide push to increase diversity and inclusion in the field. In just two years of operation, it has launched programs at 11 research universities and already counts almost 200 alumni.

AI4All has grown quickly to become a model program for broadening participation in AI among students at the high school and college levels.

The time for action is now

It will clearly take a much broader effort to name, study, and mitigate the diversity problems in AI. Because AI products and systems can reach so widely and dramatically into so many people’s lives, this effort is urgent. These are still early days in AI, though, a time when change can be easier to implement. And perhaps we have learned some lessons from the diversity problems besetting other, related fields, like computer science and engineering. One hopeful sign – “NeurIPS” is a recent modification of the meeting’s original name, made as the conference has attracted an increasingly diverse audience amid the field’s rapid growth. The old name? Not that different, just missing the “eur” …

 


Eric Iversen is VP for Learning and Communications at Start Engineering. He has written and spoken widely on STEM education and related careers. You can write to him about this topic, especially when he gets stuff wrong, at eiversen@start-engineering.com

You can also follow along on Twitter @StartEnginNow.
