Why We Publish the Internet Health Report

Later this month, Mozilla will release the second full-length Internet Health Report, which looks at where technology is making our lives and the world better, and where it is making things worse. More often than not, new online technology does both. That is why it is critical that we constantly look at what is happening in tech and ask: how do we want this to play out for humanity?

Nothing underscores this tension — and the urgency of thinking things through — more than the rapid growth of artificial intelligence.

Everyone who uses the internet is already interacting with some form of AI, and soon it will make even more headway into our lives. For the most part, this experience is positive — AI recommends our favourite music, dims our lights in the evenings and shows us the quickest or most scenic route to our destinations.

However, AI also comes with huge risks — and they are tricky to unpack and understand.

I spend much of my time thinking about things like this, and still can’t wrap my head around it. Of course, many people are trying. Bill Gates recently compared AI to nuclear power in terms of risks and advantages. We may not really know how to feel about the huge risks we face, but some of the questions we should be asking are becoming clearer.

Who designs the algorithms?

What data do they feed on?

Who is being discriminated against?

Are we being manipulated into product addiction?

Are we making the massive centralization of the internet much, much worse? (Spoiler alert: yes!)

The first spotlight article in this year’s report is entitled “Let’s ask more of AI.” It aims to put these questions out there, make the debate clearer and encourage people to talk with each other about the future we want. However, we need more than questions. We need a clear vision for how we want AI to serve humans and humanity, and of how to talk about and mitigate the risks.

That is why the Internet Health Report is so important. We put it out every April to help us collectively grapple with issues like this. It is a wide-ranging collection of stories and research that explains the key issues of the moment, from the personal to the global: for example, whether taking a DNA test is a good idea, or how privacy laws in Europe are affecting the internet as a whole.

With the report, our goal is to encourage people to think critically and question the technology in their own lives. We aim to grow public awareness through media coverage on the report that prompts conversations around the dinner table and in the boardroom. And we strive to equip activists with helpful information to better advise and persuade decision makers in government and industry to do the right thing.

By putting people at the center of this equation, we can look at the most beneficial, exciting and uplifting parts of the internet while still recognizing that there is much to be done to achieve the internet we want. Looking at it this way, through the idea of a ‘healthy’ internet, we strive to provide the tools that individuals, companies and governments need to build the internet we want, rather than accept the internet we’ve been given.

On April 24, I invite you to read the report, contribute comments, download it, share it, reproduce it and be inspired by it. In celebrating the report’s pending launch, we’re also celebrating the 200+ people who help make it happen — the academics, nonprofits and research centers that work to keep us safer and better informed; the makers and technologists who create safe spaces for us; and the everyday users who identify problems in their online spaces and offer solutions.

Ultimately, the report is by and for you. The reader. The activist. The donor. The partner. The community member. We can’t do this without you.
