Spotlight: Let’s ask more of AI

Stefania Druga from Romania teaches artificial intelligence (AI) programming to children. As a researcher, she has also studied how 450 children in seven countries interact with and perceive connected toys and home assistants, like Amazon Alexa or Google Home.

Children can understand more than parents think, she says –– including that machine learning is limited by what training data you have to work with.

The philosophy behind the software she developed for teaching is that if children are given the opportunity for agency in their relationship with “smart” technologies, they can actively decide how they would like them to behave. Children gather data and teach their computers.

This simple approach is what we urgently need to replicate in other realms of society.

To navigate the implications AI has for humanity, we need to understand it, and then decide what we want it to do. Use of AI is skyrocketing (for fun, as well as for governance, military and business) and not nearly enough attention is paid to the associated risks.

“Yup, it’s probably AI,” says Karen Hao’s back-of-the-envelope explainer about any technology that can listen, speak, read, move and reason. Without necessarily being aware of it, anybody who uses the internet today is already interacting with some form of AI automation.

Thought of simply, machine learning and AI technologies are just the next generation of computing. They enable more powerful automation, prediction and personalization.

These technologies represent such a fundamental shift in what is possible with networked computers that they will soon likely make even more headway into our lives.

Whether they produce search engine results, music playlists, or map navigation routes, these processes are far from magical. Humans code “algorithms”: basically, formulas that decide how decisions should be automated based on whatever data is fed into them.
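
To make that concrete, here is a deliberately tiny sketch of such a formula in code. Every name, weight and threshold below is invented for illustration; no real system uses these numbers.

```python
# A hypothetical, hand-written decision formula: fixed weights applied
# to whatever data is fed in, yielding an automated yes/no decision.
# All weights and inputs are made up for this example.

def score(income, late_payments, years_at_job):
    """Combine a few data points into a single number."""
    return 2.0 * income / 10_000 - 5.0 * late_payments + 1.5 * years_at_job

def decide(applicant):
    """Automate the decision: approve whenever the score clears a bar."""
    return "approve" if score(**applicant) > 10 else "deny"

print(decide({"income": 60_000, "late_payments": 1, "years_at_job": 3}))
# -> approve (score 11.5); feed in different data, get a different decision
```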

Where it begins to feel magical is when these techniques make new things possible. This Person Does Not Exist is a good example. If you visit the website and refresh the page, you will be shown an endless array of faces of people who never existed. They are images generated at random by a machine learning algorithm trained on a database of faces that do exist.

Look closely, and you will spot the errors: ears that are crooked, hair that doesn’t fall naturally, backgrounds that are blurred. This Cat Does Not Exist is less convincing. Either photo generator could improve with additional data and guidance. And the risk that such photos could be used to misrepresent reality also exists, even for such whimsical creations.
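
In broad strokes, generators like these pair two networks: a “generator” that turns random noise into an image, and a “discriminator”, trained on real photos, that judges whether the result looks authentic. The sketch below is only a caricature of the sampling step, with a single random, untrained matrix standing in for a trained network, so it outputs noise rather than a face; refreshing the page corresponds to drawing a fresh noise vector.

```python
# Caricature of the sampling step behind face generators. The random
# matrix below stands in for millions of trained weights, so the
# "image" is noise -- only the structure of the process is real.
import numpy as np

rng = np.random.default_rng(seed=0)
weights = rng.standard_normal((128, 64 * 64))  # stand-in for a trained network

def generate_image(latent):
    """Map a 128-number noise vector to a 64x64 grid of pixel values."""
    return (latent @ weights).reshape(64, 64)

z = rng.standard_normal(128)   # refreshing the page = drawing a new z
image = generate_image(z)      # a brand-new image that never existed
print(image.shape)             # (64, 64)
```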

In recognition of the dangers of malicious applications of a similar technology, researchers from OpenAI sparked a media storm by announcing they would not release the full version of an AI technology that can automatically write realistic texts, based partly on the content of 8 million web pages. “Due to our concerns about malicious applications of the technology, we are not releasing the trained model,” they wrote, calling it an experiment in “responsible disclosure”.
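
OpenAI’s model is a large neural network trained on those millions of pages, but the core move, predicting a plausible next word from patterns in example text, can be shown with a toy word-chain generator. The corpus and output below are invented for illustration.

```python
# A toy next-word generator: record which word follows which in some
# example text, then produce new text by repeatedly sampling a likely
# successor. Real text generators use neural networks instead of a
# lookup table, but the generate-one-word-at-a-time loop is the same.
import random

corpus = ("the internet is a global network and the internet "
          "connects people and the network connects machines").split()

follows = {}                       # word -> words seen right after it
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

random.seed(1)
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(follows.get(word, corpus))  # fall back if stuck
    output.append(word)
print(" ".join(output))            # fluent-ish text that no one wrote
```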

Such recognition of the faultlines and risks for abuse of AI technologies is too rare. Over the last 10 years, the same large tech companies that control social media and e-commerce, in both the United States and China, have helped shape the AI agenda. Through their ability to gather huge quantities of training data, they can develop even more powerful technology. And they do it at a breakneck pace that seems incompatible with real care for the potential harms and externalities.

Amazon, Microsoft and others have forged ahead with direct sales of facial recognition technology to law enforcement and immigration authorities, even though troubling inaccuracies and serious risks to people of color in the United States have been rigorously documented. Within major internet companies that develop AI technologies, including Amazon and Google, employees have sounded increasingly urgent alarms over ethical concerns.

Company leaders deflect with confidence in their business models, hubris about their accuracy, and what appears to be ignorance of, or lack of care for, the huge risks. Several companies, including Axon, Salesforce and Facebook, have sought to allay concerns over controversies by creating ethics boards that are meant to oversee decisions.

Meredith Whittaker, co-founder of the research institute AI Now, calls this “ethics theater” and says there is no evidence that product decisions are run past these boards, or that the boards have any actual veto power. In an interview with Recode, Whittaker asked of the companies, “Are you going to harm humanity and, specifically, historically marginalized populations, or are you going to sort of get your act together and make some significant structural changes to ensure that what you create is safe and not harmful?”

As it happens, Google’s announcement of an ethics board backfired spectacularly in April 2019 and was dismantled after employee protests and public outrage about who had (and hadn’t) been asked to join. While the company has been vocal about establishing principles for AI, and has engaged in social good projects, it also has competing priorities across its many ventures.

What real-world ethical challenges could these boards tackle if they took Whittaker’s advice? One idea would be to question an everyday function that affects billions of people. Google’s video platform, YouTube, is often described as a “rabbit hole”: endless tunnels leading from one video to another. Though YouTube denies it, research shows that its content recommendation algorithms are fueling a crisis of disinformation and cultish behavior about vaccines, cancer, gender discrimination, terrorism, conspiracy theories and [add your topic].

Pinterest and Amazon also drive engagement by learning from user behavior and suggesting new, engaging content, and they experience variations of the same problem. In response to public scandals, each has announced efforts to stop anti-vaccine content, but there is little evidence of any real change in the basic intention or function of these systems.
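
A toy model of the dynamic critics describe: if a recommender always maximizes predicted engagement, and sensational content holds attention longest, the system keeps serving it, and every view pushes its score higher. The catalog and watch times below are entirely made up.

```python
# Invented sketch of an engagement-maximizing recommender. It shows
# one thing only: optimizing for watch time rewards whatever holds
# attention longest, regardless of whether it is true or healthy.
videos = {                        # predicted minutes watched (synthetic)
    "balanced news report": 3.0,
    "cute cat compilation": 5.0,
    "outrageous conspiracy": 9.0,
}

def recommend(catalog):
    """Serve whatever the model predicts will be watched longest."""
    return max(catalog, key=catalog.get)

for _ in range(3):
    pick = recommend(videos)
    videos[pick] *= 1.1           # each view feeds back into the model
    print(pick)                   # "outrageous conspiracy", every time
```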

But it’s not just technology companies that need to be interrogating the ethics of how they use AI. It’s everyone, from city and government agencies to banks and insurers.

At the borders of nine European Union countries, an AI lie detector was tested to screen travelers. Systems to determine creditworthiness are being rolled out to populations in emerging markets in Africa and Asia. In the United States, health insurers are accessing social media data to help inform decisions about who should have access to what health care. AI has even been used to decide who should and shouldn’t be kept in prison in the United States.

Are these implementations of AI ethical? Do they respect human rights? China, famously, has begun scoring citizens through a social credit system. Chinese authorities are now also systematically targeting an oppressed minority through surveillance with facial recognition systems.

Where do we draw the line?

There are basically two distinct challenges for the world right now. We need to fix what we know we are doing wrong. And we need to decide what it even means for AI to be good.

Cutting humans out of government and business processes can make them more efficient and save costs, but sometimes too much is lost in the bargain.

Too rarely do people ask: Should we do this? Does it even work? It’s worth questioning whether AI should ever be used to make predictions, or whether we should so freely allow it into our homes.

Some of the worst missteps have involved training data that is faulty, or simply used with no recognition of the serious biases that influenced its collection and analysis.

For instance, some automated systems that screen job applicants consistently give women lower scores, because the historical hiring data they were trained on comes from a field dominated by men.
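
A synthetic illustration of that mechanism: if past hiring skewed heavily toward men, a naive screener that scores applicants by the historical hire rate of “people like them” reproduces the skew as if it were merit. All records below are invented.

```python
# Invented historical records: (gender, was_hired). Men were hired at
# four times the rate of women, so a model that treats this history
# as ground truth learns gender as a "predictive" signal.
history = ([("m", True)] * 80 + [("m", False)] * 20
           + [("f", True)] * 5 + [("f", False)] * 15)

def hire_rate(records, gender):
    """Fraction of past applicants of this gender who were hired."""
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

print(hire_rate(history, "m"))    # 0.8  -> a high "score" for men
print(hire_rate(history, "f"))    # 0.25 -> a low "score" for women
```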

“The categories of data collection matter deeply, especially when dividing people into groups,” say the authors of the book Data Feminism, which explores how data-driven decisions will only amplify inequality unless conscious steps are taken to mitigate the risks.

If we leave the field of AI to the nine big companies that dominate it, we raise the spectre of a corporate-controlled world of surveillance and conformity, especially so long as gender, ethnic and global diversity remains lacking at all levels of their workforces. Having engineers, ethicists and human rights experts collaboratively address how AI should work increases the chance of better outcomes for humanity.

We are merely at the beginning of articulating a clear and compelling narrative of the future we want.

Over the past years, a movement to better understand the challenges that AI presents to the world has begun to take root. Digital rights specialists, technologists, journalists and researchers around the globe have in different ways urged companies, governments, military and law enforcement agencies to acknowledge the ethical quandaries, inaccuracies and risks.

Each and every one of us who cares about the health of the internet needs to scale up our understanding of AI. It is being woven into nearly every kind of digital product and is being applied to more and more decisions that affect people around the world. For our common understanding to evolve, we need to share what we learn. In classrooms, Stefania Druga is making a small dent by working with groups of children. In Finland, a grand initiative sought to train 1% of the country’s population (55,000 people) in the elements of AI. What will you do?

How can we as a public make choices and engage with technology we don’t yet understand?

  1. Paparuss

    Here is how I deal with AI. As soon as the machine answers the phone, I say agent, or representative, or person. Then the machine says, "Let's see if I can help you with that." So I repeat the word agent, etc. Then the machine says, "OK, but first tell me what your question is." Then, kind of loud, I say AGENT, etc. It may take four or five times yelling Agent at the damn machine, but I finally get to talk to a real live person. Yes, I'm over 65 and a bit of a Luddite.

  2. Blaster84x

    I think that AI itself is not a threat, and it shouldn't be regulated by anyone. The REAL big thing is monopoly or oligopoly, and other ways of decreasing the competition that yields better alternatives. OpenAI's model of only releasing the code after "careful consideration" should be replaced by simply releasing an equally good detector AI together with every deepfake-producing one.

  3. Kai

    I'm really surprised there's no mention of European AI Alliance and the AI Ethics Guidelines it released in April.

  4. Jerry GaMarsh

    Stephen Hawking said it best: "uncontrolled AI is a threat to mankind"

  5. BinderL

    Asking what it even means for AI to be good is really asking what is good for humans. That question is an old one, and everyone has their own answer (think of culture, family values, societal values, religions...). We need to consider what technology is when thinking about AI.
    For example, technology gives people new ways to do old things, like cutting down trees (fashionable right now). A device could use AI to survey a forest and decide to cut one way or another; it can select and propose a way to do it. But that way depends on what we put into the model, and it is influenced by how humans see the tree. We call this evolution, and it is a darn good thing. Devices shape people's minds; our way of seeing evolves with technology.
    Yet the old question stays the same: at this moment, should we cut the tree at all?
    So a technical system like AI is neither good nor bad; it only changes how we interact with things, through mediation. We should pay attention to how we run our economy (think of energy, raw materials...) instead of asking whether technology is good or bad, because, as Jacques Ellul explains, technology is independent of human will.

  6. Clara A Johnson

    The service that you are providing is invaluable. Thank you

  7. Michael

    Machine learning and AI use historical data and, as such, they only serve to exacerbate who we are, or even, who we were. One of the great capacities of human beings is our ability to choose. Despite what has been done in the past, we can choose to do things differently. By making predictions and recommendations based on historical data, AI seems to take this choice away from us and dooms us to be who we were rather than who we want to be.

    Like with anything else, if people do not have the means, or even the will, to protect themselves, it is the role of government to step in and protect them.

  8. Rawk

    We can do something really radical and say no to these companies. It would be very hard to stop using their products, but it's about time we as a public play a role and have a say in the technologies that are being created and then sold to us. Ignorance about tech needs to stop; it was fun while it lasted, but now it's getting out of hand. Thank you Mozilla

  9. Jamshed

    All new technologies start with the best of objectives but get distorted when they proliferate and can get misused by lumpen elements. A classic example is nuclear technology. It has its good points, but one now sees a lot of misuse of it by companies, heads of state, criminal organisations, etc. The crying need in implementing new sciences is to create a world body of experts, with wide-reaching powers internationally, to stop exploitation of these sciences. Whether politicians will be able to do so is a very moot point. I for one am highly sceptical that this will happen.