The internet is transformative because it is open: everyone can participate and innovate. But openness is not guaranteed – it’s always under attack.
Openness is a foundational pillar of the internet. Today’s digital world exists because people don’t need permission to create for and on the Web.
Yet in 2019, the internet’s openness is as radical, and as threatened, as ever.
Governments worldwide continue to restrict internet access in a multitude of ways, ranging from outright censorship, to requiring payment of taxes to use social media, to shutting down or slowing down the internet to silence dissent. Powerful lobbyists are winning fights for more restrictive copyright regimes, and big tech platforms lock us into proprietary systems.
At the same time, the open Web is resilient.
Volunteers in Wikimedia’s Wikidata community have created a data structure that enables content to be read and edited by both humans and machines. Advocates of open data are pushing for more transparency to understand how companies create digital profiles of us and what they do with the data.
But a tension between openness and inclusion persists. Despite the many measures taken, hate speech and harassment on online platforms remain an urgent and serious problem.
In Germany, one year after implementation, a law to reduce hate speech online was neither particularly effective at solving what it set out to do, nor as restrictive as many feared.
Yet the lack of strong evidence isn’t stopping similar regulations from being introduced elsewhere. The European Union is currently debating new rules that would require companies of all sizes to take down ‘terrorist content’ within one hour, or face stiff penalties.
Heightened discussions about artificial intelligence (AI) and automated decision making are also introducing new angles to this debate.
New user-friendly AI tools have made it easier than ever to create deepfakes: media that depict a person saying or doing something they never did. These sorts of developments raise a critical question: how do we mitigate the real harms that misuse of a technology could cause, particularly to vulnerable groups, without sacrificing the benefits of the open internet?
Sometimes, the best approach might be never to release a technology at all.
OpenAI recently built a language model so good at automatically generating convincing text that the organization became concerned about its potential for misuse. To mitigate that harm, it decided to release only a limited version of the tool. The choice sparked criticism that it was the “opposite of open,” while others praised the decision as setting a “new bar for ethics.”
Grappling with the challenge of safeguarding the open internet, while building an inclusive digital world, remains a pivotal task for companies, technologists, policy makers and citizens alike.
This is especially true as a new dimension emerges, centered around an urgent question: how do we decide what technologies to build and use at all?