Recognizing the bias of artificial intelligence

“We have entered the age of automation — overconfident yet underprepared,” says Joy Buolamwini, in a video describing how commercial facial recognition systems fail to recognize the gender of one in three women of color. The darker the skin, the worse the results.

It’s the kind of bias that is worrying now that artificial intelligence (AI) is used to determine things like who gets a loan, who is likely to get a job and who is shown what on the internet, she says.

Commercial facial recognition systems are sold as accurate and neutral. But few efforts are made to ensure they are ethical, inclusive or respectful of human rights and gender equity before they land in the hands of law enforcement agencies or corporations that may impact your life.

Joy Buolamwini is the founder of the Algorithmic Justice League, an initiative to foster discussion about biases of race and gender, and to develop new practices for technological accountability. Blending research, art and activism, Buolamwini calls attention to the harmful bias of commercial AI products — what she calls the “coded gaze”. To inform the public and advocate for change, she has testified before the Federal Trade Commission in the United States, served on the European Union’s Global Tech Panel, written op-eds for major news publications and appeared as a keynote speaker at numerous academic, industry and media events.

On websites and in videos, she shares her lived experience and spoken word poetry about a topic that is more commonly dealt with in dry, technical terms (or not at all).

Ain’t I a Woman?

The “coded gaze” refers to how commercial AI systems can see people in ways that mirror and amplify injustice in society. At the MIT Media Lab’s Center for Civic Media, Buolamwini has researched commercial facial analysis systems, illustrating how gender and racial bias and inaccuracies occur. Flawed and incomplete training data, false assumptions and lack of technical audits are among the numerous problems that lead to heightened risks.
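
As a rough illustration of what such a technical audit involves, the short Python sketch below (a hypothetical example with made-up records, not Buolamwini's actual methodology or code) reports error rates disaggregated by demographic subgroup rather than a single overall accuracy figure, which is how gaps like "one in three women of color misclassified" become visible.

    # Minimal sketch of a disaggregated accuracy audit (hypothetical data).
    from collections import defaultdict

    # Each record is (subgroup, true label, predicted label). In a real audit
    # these would come from a carefully constructed benchmark dataset.
    records = [
        ("darker-skinned female",  "female", "male"),
        ("darker-skinned female",  "female", "female"),
        ("darker-skinned male",    "male",   "male"),
        ("lighter-skinned female", "female", "female"),
        ("lighter-skinned male",   "male",   "male"),
    ]

    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1

    # Report per-group error rates instead of one averaged accuracy number.
    for group in sorted(totals):
        rate = errors[group] / totals[group]
        print(f"{group}: {errors[group]}/{totals[group]} misclassified ({rate:.0%} error)")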

To fight back, the Algorithmic Justice League and the Center on Privacy & Technology at Georgetown Law launched a Safe Face Pledge in December 2018. It’s a series of actionable steps companies can take to ensure facial analysis technology does not harm people. A handful of companies have signed the pledge and many leading AI researchers have indicated support.

It’s one of many initiatives Buolamwini and colleagues are experimenting with to elicit change from big tech companies. So far, she has found that drawing public attention to facial recognition biases has led to measurable reductions in inaccuracies. After Amazon attempted to discredit the findings of her research, leading AI experts fired back in April, calling on the company to stop selling its facial recognition technology to law enforcement agencies.

More can be done, she says. “Both accurate and inaccurate use of facial analysis technology to identify a specific individual (facial recognition) or assess an attribute about a person (gender classification or ethnic classification) can lead to violations of civil liberties,” writes Buolamwini on the MIT Media Lab blog on Medium.

She says safeguards to mitigate abuse are needed. “There is still time to shift towards building ethical AI systems that respect our human dignity and rights,” says Buolamwini. “We have agency in shaping the future of AI, but we must act now to bend it towards justice and inclusion.”

How do you feel about facial recognition systems?

  1. nt

    In other words, they want AI to be tweaked to have the "right kind of bias" instead of unimpeded code and data training.

  2. Anonymous

    I just think it's a problem with the AI training data, not that some people really want to program them as a "racist" AI.

  3. Mark

    AI can be incorporated into smart weapons that can "discriminate" their target(s). However, this technology can fall into the wrong hands. For example, terrorists could use it to assassinate government officials like the head of state (the president or prime minister). Even the government can misuse this technology by using it to scour social media sites and pick its targets, in order to silence dissidents and opponents of the government.

  4. Anonymous

    There is no business that has a general legitimate need to use facial recognition. Sale or transfer of such technology/capability should require an individual validated license listing the specific intended use. All other uses should be prosecuted.

  5. Anonymous

    I think there should be mandatory quality requirements for the use of AI in sensitive areas, e.g. law enforcement, financial transactions or recruitment. These quality requirements should be enforced by law, regularly updated to meet the latest standards, and systematically verified through standardized tests.
    This could at least help to minimize issues like unfair biases and poorly trained AI, and would increase trust and fair competition.

  6. Alex

    These systems are undertrained.
    That means there's a wrench in the machinery, preventing them from being finished before shipping.

  7. Anonymous

    Until it's ready, it's wise not to let it reach the market.