When dozens of people fell gravely ill from eating romaine lettuce in 2018, public health authorities in the United States and Canada could not figure out where the E. coli-contaminated leaves were farmed. The lettuce had changed hands so many times, from washing and chopping to packing and shelving, that investigators could not retrace its path. The only option was to temporarily declare all romaine lettuce, from any source, unsafe.
It’s a stretch of the imagination, but let’s compare that to what we are experiencing in the world of “personalised” or “targeted” digital ads.
We have absolutely no idea of the ingredients that go into the daily bread of the internet. The ads we are served as we use mobile apps and browse the Web are like lettuce leaves scattered across the planet: they may well be healthy, but the supply chain is so muddled that we have no way to understand what is happening.
Pretty much everything we do when we interact with the internet can be tracked by someone (or something) without our knowledge. From the websites we visit, to the apps on our phones, to the things we write in emails or say to voice assistants. We have no way of knowing how this big salad of data may be combined by different companies with information that uniquely identifies us.
It appears that collecting data about everything and anything we do is of commercial interest to someone, whether app developers, insurance agents, data brokers, hackers or scammers. The lines have been blurred between what’s public and private information. Your credit card may share a list of what you buy in stores with Google. Your online dating profile has perhaps been copied and resold. Why is this?
Not all data about you is used to sell ads, but it is primarily because of the ad-driven internet economy that data has become such a hot commodity. It is why people now speak of surveillance capitalism and the attention economy. The phrase "You are the product" predates the internet, but has gained new currency as a way to explain how so much online can be "free". Personal data may seem like a small price to pay. But the social tax is mounting: threats to freedom and human rights.
To start with the positives: digital ads have been a boon to the global economy. Free online services have driven the uptake of mobile internet around the world. Ads have helped publishers and startups monetize their online content and services.
For some of the most powerful companies of the internet, Google, Facebook and Baidu, ads are a primary source of revenue even as they expand their businesses into new directions and geographies. For Google and Facebook especially, access to data is a source of global market power and leverage in business negotiations. In the United States, digital ad spending has for the first time surpassed spending on print and television.
The ad-tech industry is vast, but by some estimates Facebook and Google alone controlled around 84% of the global digital ad market outside of China in 2018. To succeed, they have developed product design practices centered on holding the user's attention and maximizing engagement to drive ad revenue.
Targeted ads for the most part promote run-of-the-mill products and services, but the same tools can just as easily be exploited by people with criminal or hateful intentions. In a few minutes, you can place content on YouTube videos, in the news feeds of Twitter and Facebook, and in the search results of Google. By selecting which demographics to target, advertisers on some platforms have been spotted excluding people of a certain race or gender from housing or job ads. Or, in the case of Facebook, even directly targeting "affinity groups" like "Jew Haters" (yes, really). Facebook said its categories are created by algorithms, and when confronted said it would make changes, but it raises the question of how much data should be collected and what it should ever be used for.
Your data profile is a sandwich of data that you knowingly or unknowingly share, interpreted by secret algorithms that make use of statistical correlations. For instance, searching online for "loan payment" might say something about your finances. "Liking" articles or joining Facebook groups could help define your affinities.
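To make the idea concrete, here is a toy illustration of how behavioral signals might be aggregated into inferred "affinities". The signal names and category weights are entirely invented for this sketch; real profiling systems are proprietary, far larger, and statistically learned rather than hand-written.

```python
# Toy illustration of interest inference from behavioral signals.
# All signal names and weights below are hypothetical examples,
# not any company's actual categories or data.

SIGNAL_WEIGHTS = {
    "search:loan payment": {"personal_finance": 0.8, "credit_seeking": 0.6},
    "like:marathon_training_article": {"fitness": 0.7},
    "join:home_brewing_group": {"craft_beer": 0.9, "cooking": 0.3},
}

def build_profile(signals):
    """Aggregate per-signal weights into a ranked interest profile."""
    profile = {}
    for signal in signals:
        for category, weight in SIGNAL_WEIGHTS.get(signal, {}).items():
            profile[category] = profile.get(category, 0.0) + weight
    # Rank inferred interests from strongest to weakest.
    return sorted(profile.items(), key=lambda kv: kv[1], reverse=True)

print(build_profile(["search:loan payment", "join:home_brewing_group"]))
```

Even this trivial version shows why profiles feel invasive: a handful of mundane actions, each harmless alone, combine into a ranked portrait of your finances and habits.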
“Ads can be done in a more privacy friendly way. But publicly-traded corporations have a duty to maximize shareholder profits, which for some companies means squeezing every drop of data out of their users,” says Casey Oppenheim, the CEO of Disconnect, an online privacy tool that blocks trackers and helps guard personal information from prying technologies.
The comparison to a public health crisis (remember the lettuce?) is apt in no small part because the ad-tech industry, despite a focus on "better ads", has neglected privacy for years and still faces accusations of skirting privacy and consent today. Even the supposed accuracy with which the value of an ad purchase can be measured is a myth. It's an open secret that a huge portion of the internet traffic directed to ads comes from bots, not humans. An estimated $6.5 billion USD was lost to fraud by advertisers globally in 2017 because of websites that cash in by using bots to inflate their numbers.
Many advertisers are angry and have demanded more transparency in the supply chain. “Silicon Valley has created a fetish around automation,” says Rory Sutherland. He is the vice chairman of the advertising agency Ogilvy in the United Kingdom, and says an obsession with measuring results of targeting has led to a decline in the quality of ads compared with traditional mass media marketing. “The obsession with targeting means what you are rewarding is your algorithm’s facility at identifying a customer,” he says. He compares it to walking into a pub with a piece of paper that says, “Drink beer!” Most people are already there to drink beer, he says. “What about the people outside?”
In 2017, a number of major marketers stopped placing ads on YouTube after a slew of scandals over ads on violent and inappropriate videos. For the general global public it can be jarring to see such content monetized. It adds to the sneaking sense of discomfort growing among many internet users with every report of breached data, security flaws, and overreaching data sharing agreements with other companies. Can we really trust these companies with our data?
As internet users we may have more ‘awareness’ about privacy, but still no clear sense of what to do. We are deeply dependent on companies we wish would protect us.
In a restaurant, a food and safety inspector has a checklist of things to look for that may be a danger to public health. The Corporate Accountability Index of the organization Ranking Digital Rights is a kind of checklist too — but a complex one that ranks what the biggest internet and telecom companies disclose about how they protect the privacy and freedom of expression of users. By publicly scoring companies — and none scores high — the small but influential organization creates an incentive for companies to improve year over year, and a method to track noticeable progress and setbacks over time.
Nathalie Maréchal is a senior research analyst with Ranking Digital Rights in Washington D.C. She is leading an open consultation process to create entirely new indicators for the index related to targeted advertising. "We need to decide together, what standards for disclosure and good practice should be used to hold these companies accountable," she says. Ranking Digital Rights' current ideas for best practices will sound familiar to many internet researchers and digital rights organizations. Among other things, they suggest companies should allow third-party oversight of the parameters for ads (e.g. "affinities") and of who is paying for them. And that companies should state rules for prohibited content and use of bots, and regularly publish data showing how those rules are enforced.
Such tools and practices have already begun to emerge from companies, not of their own initiative, but compelled either by regulation or public pressure. This year, Facebook says it will roll out political ad transparency tools globally by June. In 2018, Google says it killed over two billion "bad ads", and Facebook took steps to remove 5,000 ad categories to prevent discrimination. Twitter began collecting more personal data in 2017, but now also gives you controls to change how it categorizes you.
Data privacy regulations are improving in numerous countries and states, and courts and civil society are taking companies to task around the world on matters of data collection and consent for targeted advertising. Regulation helps!
And so does technology. To protect the security of users, most major browsers have introduced different variations of tracking protection (and sometimes also ad blocking). Total or partial ad blocking by different companies in different configurations has gone fully mainstream with hundreds of millions of users. It makes the Web faster, and batteries last longer.
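The core mechanism behind most tracking protection is simple: check each outgoing request's hostname against a curated blocklist of known tracker domains. A minimal sketch of that matching logic, with made-up blocklist entries rather than a real curated list, might look like this:

```python
# Minimal sketch of domain-based tracking protection, the basic idea
# behind blocklist tools in browsers and extensions. The blocklist
# entries are hypothetical examples, not a real maintained list.

BLOCKLIST = {"tracker.example", "ads.example"}

def is_blocked(hostname):
    """Return True if the hostname is a blocklisted domain or any
    subdomain of one (e.g. pixel.tracker.example)."""
    labels = hostname.lower().rstrip(".").split(".")
    # Check the hostname and every parent domain: a.b.c -> a.b.c, b.c, c
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return True
    return False

print(is_blocked("pixel.tracker.example"))  # True: subdomain of a blocked domain
print(is_blocked("news.example"))           # False: not on the list
```

Real tools layer much more on top, such as URL-pattern rules, first-party vs. third-party context, and allowlists for sites that break, but suffix matching against a maintained list is the heart of it.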
Coming back to the lettuce. What would the equivalent of “farm to table” in food activism be for digital ads? Perhaps we would see who paid for ads, understand why we are targeted, and have control over who is collecting our data for what.
What really needs rethinking today is the notion that digital ads can only be effective when they are targeted, and when companies know everything about everyone. Many brands and marketers are backing away from this idea for lack of evidence. Unless internet companies are able to regain our trust by changing practices (or perhaps be legally compelled to protect our secrets and interests, like doctors and lawyers), we can invest some hope in a new generation of software initiatives that explore decentralized solutions to give people personal control over who has access to their data.
“I spent 10 years working with an environmental health organization and I have always seen parallels to the privacy world,” says Oppenheim. “Just like we can connect people to the values of the food they eat, we can also connect them to the value of their data.”