Brussels
Sweeping new European Union legislation seeks to “revolutionize” the internet, forcing social media giants including Facebook and YouTube to take steps to tamp down the spread of extremism and disinformation online.
Known as the Digital Services Act (DSA), it is likely to create ripple effects that could change how social media platforms behave in America, too.
In one of the most striking requirements of the new law, Big Tech companies with more than 45 million users will have to hand over access to their so-called algorithmic black boxes, lending greater clarity to how certain posts – particularly the divisive ones – end up at the top of social media news feeds.
Why We Wrote This
A new EU law calls on Big Tech companies to open up their algorithmic “black boxes” and better moderate online speech. The goal is no less than preserving the public square on which democracies depend.
Companies must also put in place systems designed to get illegal content pulled from the web more quickly, prioritizing requests from “trusted flaggers.”
And if platforms recognize patterns that are causing harm and fail to act, they will face hefty fines.
“We need to get under the hood of platforms and look at the ways in which they are amplifying and spreading harmful content such as hate speech,” says Joe Westby, deputy director of Amnesty Tech at Amnesty International in London.
“The DSA is a landmark law trying to hold these Big Tech companies to account,” he adds.
Unlocking the Big Tech business model
Big Tech companies have long endeavored to shrug off regulation by invoking freedom of speech. The DSA takes the tack that while ugly and divisive speech shouldn’t be policed, neither should it be promoted – or artificially amplified.
But in order to sell ads and collect user data – which they also sell – the big online platforms have been doing precisely this.
The key to this business model is keeping users online for as long as possible, in order to collect as much data about them as possible.
And research has shown that what keeps people reading and clicking is content that makes them mad, notes Jan Penfrat, senior policy adviser at European Digital Rights, a Brussels-based association.
This in turn gives Big Tech companies incentive to prioritize and push out anger-inducing content that provokes users “to react and respond,” he says.
This point was driven home last year through a trove of internal documents made public by whistleblower and former Facebook data engineer Frances Haugen.
In leaked company communications, an employee laments that extremist political parties were celebrating Facebook’s algorithms, because they rewarded their “provocation strategies” on subjects ranging from racism to immigration and the welfare state.
It was one of many examples in those documents of how Facebook’s algorithms appeared to “artificially amplify” hate speech and disinformation.
To endeavor to fix this, the DSA will require Big Tech companies to conduct and publish annual “impact assessments,” which will examine their “ecosystem of users and whether or not – or how – recommendation algorithms direct traffic,” says Peter Chase, senior fellow at the German Marshall Fund in Brussels.
“It’s asking these large platforms to think about the social impact they have.”
There are insights to be had from these sorts of regular exercises, analysts say.
Twitter, which has a reputation for publishing self-critical research, made public an internal evaluation last October that found its own algorithms favor conservative rather than left-leaning political content.
What it couldn’t quite figure out, it admitted, was why.
The DSA aims to provide some clarity on this front by requiring Big Tech companies to open up their algorithmic black boxes to academic researchers approved by the European Commission.
In this way, EU officials hope to glean insights into, among other things, how Big Tech companies moderate and rank social media posts. “On what basis do they recommend certain types of content over others? Hide or demote it?” Mr. Penfrat asks.
And under the law, if Big Tech companies discover patterns of artificial amplification that favor hate speech and disinformation pushed out by bad actors and bots – what social media companies call “coordinated inauthentic behavior” – and don’t take action to stop it, they face devastating fines.
These could run up to 6% of a company’s global annual sales. Repeat offenders could be barred from operating in the EU.
“They have to do something about it, or they can get caught,” says Alex Engler, fellow in Governance Studies at the Brookings Institution.
“So they can’t just shrug their shoulders and say, ‘We don’t have a problem.’”
“Weaponized” ads – and the law’s response
Up until now, such evasiveness is precisely what has characterized Big Tech companies, and analysts say it’s largely because promoting divisive content has been so wildly profitable.
Mr. Penfrat recalls the surprise of EU policymakers he lobbied when he would explain the nearly unfathomable amount of personal data that tech giants commodify – and how they often tap the emotional power of anger through a “surveillance-based” advertising model.
“Every single time you open a website, hundreds of companies are bidding for your eyeballs,” he says. In a matter of “milliseconds,” the ads pushed by data brokers who have won the bid are loaded for web users to view.
But it’s not just goods and services that advertisers are selling. “Anyone can pay Facebook to promote certain types of content – that’s what ads are. It can be political and issues-based,” Mr. Penfrat says.
And bad actors have taken advantage of this, he notes, pointing to how the Russian government “weaponized” ads to push its preferred candidates in U.S. elections and justify war in Ukraine.
The DSA will ban using sensitive data, including race and religion, to target ads, and prohibit ads aimed at children as well. It also makes it illegal to use so-called dark patterns, manipulative practices that trick people into things such as consenting to let online companies track their data.
What’s more, it requires Big Tech companies to speed their processes for taking down illegal posts – including terrorist content, so-called revenge porn, and hate speech in some countries that ban it – in part by prioritizing the recommendations of “trusted flaggers,” which could include nonprofit groups approved by the EU.
Likewise, if companies remove content that they say violates these rules, they must notify people whose posts are taken down, explain why, and have appeals procedures.
“You’ve got these mechanisms today, but they’re very untransparent,” Mr. Penfrat says. “You can appeal but never get a response.”
A European law with U.S. effects
The DSA has been received by data-policy experts with a mix of skepticism and praise – with some voicing worry about unintended harm to competition or the diversity of online speech.
Yet the DSA is expected to drive policy in the United States as well as in Europe, says Mr. Engler, who studies the impact of data technologies on society.
“This is a classic example of the ‘Brussels effect’: the idea that when Europe regulates, it ends up having a global impact,” he adds. “Platforms don’t want to build different infrastructure based on whether the IP address is in Europe.”
And as academics delve into Big Tech’s black boxes, the mitigating measures they suggest will not only be a good starting point for public debate, but could also provide inspiration for America.
During Ms. Haugen’s whistleblower testimony, U.S. lawmakers signaled that they could be open to the sorts of regulations that the DSA puts in place.
At a press conference following the congressional testimony last October, Sen. Richard Blumenthal, a Connecticut Democrat, marveled at the bipartisan agreement on the need for reform.
“If you closed your eyes, you wouldn’t even know if it was a Republican or a Democrat” speaking, he said. “Every part of the country has the harms that are inflicted by Facebook and Instagram.”
American and European regulators say this is true on both sides of the Atlantic.