Speaking truth to power has earned Filip Stojanovski enemies before. As program director of the Metamorphosis Foundation in Macedonia, Stojanovski helped create the media watchdog Media Fact-Checking Service, along with several other projects that support open knowledge and democracy on the Internet. Still, he found it absurd to see sponsored posts on Facebook linking to false claims about him in 2015.
“It was clear to me, this is a propaganda campaign against people who refuse to be silent about problems in this country,” he says. He still doesn’t know who paid for them.
This type of fraud in social media is reaching epidemic proportions worldwide, at least in part because the online advertising economy that underlies much of today’s Internet is terribly broken. Local politics aside, the rise of misinformation discussed under today’s catch-all banner of ‘fake news’ needs to be understood in the context of unhealthy market realities that can reward malicious behavior for profit or political gain.
Most people now get at least some of their news from social media. To maximize advertising dollars, news feeds and timelines show the content that attracts the most attention. This ends up favoring headlines that scream for reactions (expressed as shares, “likes” and comments). Add to this the ability to boost the visibility of any message by buying an “ad” targeting the people most likely to react (based on interests, behaviors and relationships), and anyone can churn out disinformation at unbelievable rates – and track their success. If only reality were as exciting as fiction…
The range of actors who create false information extends from malicious to simply opportunistic, with both local and global targets. And the types of people who forward, share and spread disinformation (when they are in fact real people, and not bots) have no unifying characteristic. Everyone is susceptible, though extremists may be more prone, perhaps because they are already outraged about claims others do not perceive as fact.
In the United States, disinformation scandals (including the fabricated story tying Hillary Clinton to a pedophile ring in a pizzeria) marred the presidential election in 2016, and questions about what role disinformation played in the election of Donald Trump reverberate today. Russian operatives are major protagonists in this line of inquiry, based on clear evidence that a Kremlin-linked organization, the Internet Research Agency, spent hundreds of thousands of dollars to fuel toxic political discourse, both before and after the election.
In this case, reality is actually so bizarre you’d think it was fiction.
Russian operatives created dozens of fake Facebook pages, like “BlackMattersUS” and “Heart of Texas,” that mimicked the language of different ends of the political spectrum in the United States. By attracting thousands of followers to the pages, they were able to use them to organize real-life protests – once, even a protest and a counter-protest at the same time.
Many headlines have been devoted to Russia vs. the United States, but such behavior is not specific to Russia. In all too many countries – democracies as well as authoritarian states – governments, militaries and political parties are using the Internet to manipulate public opinion at home or abroad under entirely false pretenses. They employ proxies and deploy trolls, bots and other techniques to obscure who they really are.
Macedonians are themselves quite familiar with Russian interference. But their own battles with disinformation stretch back long before the Internet.
Filip Stojanovski believes that decades of government propaganda, through various stages of conflict and political transition from socialism to democracy in Macedonia, have resulted in jaded citizens. Disinformation is a regular feature of how public opinion is shaped, he says, because mainstream media perform directly in the service of populist parties.
This particular ecosystem of truth, lies and politics has proved fertile ground for a cottage industry of ‘fake news’ that also made a cameo appearance in the U.S. election.
Investigative journalists in several countries (starting as early as six months before U.S. election day) traced the origins of thousands of ‘fake news’ stories to Veles, a small town in Macedonia once known for its porcelain. Young people there have created hundreds of websites with English-language headlines designed to rake in digital ad dollars, covering topics from health and sports to finance.
But what they found most lucrative? Stories about Donald Trump. Exploiting the same social media mechanics as described above, Macedonian teenagers were able to make the “attention economy” work for them. Realistically speaking, these are the same dynamics that make Trump the biggest story in mainstream U.S. digital news media. People click, ads pay, more articles are written.
Online misinformation is a major threat to the health of the Internet and all of the societies it touches: it can fuel political disorder, undermine truth, and spread hatred and rumors amid conflicts or disasters. Just as dangerously, attempted quick fixes by politicians (with or without ulterior motives) may threaten the openness of the Internet.
For example, Germany’s reaction to misinformation and hate speech online was to make social media platforms responsible for taking down unlawful content. Other countries, including Russia and Kenya, have passed laws that follow suit. We should be wary of any solutions that make Facebook, Twitter or any other corporations (or their algorithms) gatekeepers of the Internet.
Instead of quick fixes, we need to take the time to better understand the problem and the kaleidoscope of actors and symptoms. We’re facing a mix of junk news, computational propaganda, information pollution and low digital literacy.
Numerous people are already working on ways to tackle parts of the problem. Developers and publishers are trying to build more thoughtful and balanced communities around their news. The Credibility Coalition is working on a Web standard to support the detection of less trustworthy or reliable content. Teachers are developing curricula to help their students grapple with misinformation. And social platforms are trying to make political ads more transparent, although with limited effect. These are still early days for many ideas.
Even if efforts like these succeed, many argue that we’ll still have to tackle a bigger Internet health problem: the underlying online advertising and engagement model that rewards abuse, fraud and misinformation. It’s hard to imagine fixing this problem without regulation, radical changes in Internet business models or both.
We also can’t fall into the trap of blaming technology for the global social and economic conditions that lead to polarized political debate, hyper partisan media or any of the other very human factors that contribute to these problems.
That the very tools designed for civic discourse and community building are being abused and undermined plays precisely into the hands of those who prefer closed societies, fewer facts and a less healthy Internet.
While these problems are big and complex, coming up with solutions is critical to the health of the Internet – and our societies. If we can tackle these problems while still leaving the open, free-speech-friendly nature of the Internet intact, we have the potential to reinvigorate the public sphere. If not, we will be stuck in a very big mess.
That’s the truth.