In the science fiction stories of the early 20th century, artificial intelligence often appeared as hardware: hulking, clanking contraptions of brass and steel.

Decades later, life hasn’t quite imitated art. The AI we coexist with today is often housed in the dashboards of our cars or voice-controlled speakers like the Amazon Echo. Or it is tucked out of sight completely, optimizing our web searches, detecting fraudulent insurance claims and plying us with online ads.

Today’s AI does, however, imitate a different idea from science fiction: that intelligent machines might not always be used for noble purposes, or do precisely as humans expect.

AI is already in the process of revolutionizing medicine, business and transportation. But it can also promote misinformation and perpetuate bias, even when designed with the best of intentions.

Meredith Whittaker is the co-founder of the AI Now Institute and a research scientist at New York University, where she examines the social implications of AI. She says one of the biggest problems with today’s AI ecosystem is the resources required to enter the field. The vast majority of the technology is created and controlled by a small cadre of powerful companies, like Google, Amazon and Facebook in the United States, or Tencent and Baidu in China.

Meredith Whittaker. Photo by Justine Suzanne Jones (CC BY-SA 4.0)

“Contrary to the Silicon Valley creation myth, it simply wouldn’t be possible to start an AI company with a computer and a good idea in your garage,” Whittaker says.

Developing artificially intelligent systems requires significant computational power, which does not come cheap. Further, effectively training these systems requires ongoing access to massive amounts of data – the type available to social media giants or mass-market smartphone producers, not small businesses or individual makers.

Then there’s the talent component: people with advanced mathematics and computer science degrees are in high demand. “There’s such competition among tech giants for this small pool of talent that they’re paying football star-level signing bonuses,” Whittaker says.

The bias problem

The fact that a small number of players and a narrow talent pool are responsible for designing and deploying vast AI systems creates conditions for bias, Whittaker explains. When a homogeneous group of people – say, American males in Silicon Valley – build technology, they may omit the viewpoints and needs of people outside their realm of experience.

Journalists and researchers have already uncovered a number of troubling examples of biased systems. Algorithms and related data sets used by law enforcement across the U.S. routinely misjudge an individual’s likelihood of committing a crime based on race, according to an in-depth investigation by ProPublica in 2016. In Houston, Texas, AI used to evaluate school teachers produced flawed results – and led to a lawsuit against the school district last year. And a spate of studies has shown that facial recognition technology has more difficulty recognizing women and people of color than it does white men.

“Diversity in the tech industry is a huge issue across the board, but existentially important regarding the social and political impacts of AI,” Whittaker says. “We need to worry about the types of control possible when a small, homogeneous set of actors are responsible for technology that is influencing the lives of billions of people,” she adds.

The right way forward

Opening up the AI space can help address many of these challenges.

Creating open and diverse training data can lower the barriers to entry for smaller players considering the AI field. Opening more effective pathways for people of diverse backgrounds to flourish in AI-related careers can broaden the spectrum of voices involved. And making it easier for everyone, technically savvy or not, to understand the implications of AI can help all of us have a say about the roles these technologies should play in our communities.

Concerned technologists and policymakers – from Whittaker’s AI Now Institute to New York City’s mayor – are already working on this. Indeed, AI Now recently published a framework for Algorithmic Impact Assessments, designed to help core government agencies use and assess automated systems more responsibly.

But Whittaker says this is only a first step. “We need ongoing research, debate and discussion. I don’t think we’re at a point where we can talk about ‘the solution,’ because I don’t think we have a clear understanding of the scope of the problems,” she says.

Whittaker says we need to answer the underlying questions: Who is most at risk from biased and inaccurate systems? How can we know whether a system is fair? And how do we, as a public, decide when we should and shouldn’t use AI? Whittaker suggests that domains with high potential for harm, such as healthcare and education, require the most safeguards.
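To make the "is it fair?" question concrete, here is a minimal, purely illustrative sketch – not a method drawn from Whittaker or AI Now – of one basic check researchers run: comparing a model's false positive rate across demographic groups. The data, group names and model outputs below are invented for the example.

```python
# Illustrative only: compare a classifier's false positive rate across groups.
# All records here are invented; a real audit would use actual predictions,
# ground-truth outcomes and demographic attributes gathered responsibly.

from collections import defaultdict

# (group, true_label, predicted_label) for a hypothetical risk-scoring model
records = [
    ("group_a", 0, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

counts = defaultdict(lambda: {"false_pos": 0, "negatives": 0})
for group, actual, predicted in records:
    if actual == 0:  # only true negatives can produce false positives
        counts[group]["negatives"] += 1
        if predicted == 1:
            counts[group]["false_pos"] += 1

for group, c in sorted(counts.items()):
    rate = c["false_pos"] / c["negatives"] if c["negatives"] else float("nan")
    print(f"{group}: false positive rate = {rate:.2f}")

# A large gap between groups (here 0.33 vs. 0.67) is the kind of disparity
# ProPublica reported in recidivism risk scores, and a signal to dig deeper.
```

Even this simple check only surfaces one kind of disparity; equalizing a single metric does not settle whether a system is fair, which is part of why Whittaker argues the public conversation has to go beyond technical fixes.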

“The industry has spent a significant amount of energy and resources on advancing AI systems and making them marketable and commodifiable,” Whittaker continues. “Very little has gone into understanding the impact in social and political contexts.”

In the short term, Whittaker urges caution. “We might want to think about waiting to use certain AI technology until we have created the infrastructure to understand it,” she says. “There need to be ways to say, ‘We won’t use this system until we have guaranteed that it’s safe.’”