Tech asks EU for hate-speech moderation protection

As the EU rewrites its digital policy, tech companies are asking for legal protections to moderate hate speech and illegal content more aggressively.

Pádraig Belton, Contributor, Light Reading

October 26, 2020

Tech firms have asked the European Union to shield them from legal liability when they take down illegal content and hate speech more actively.

At issue is the current EU rule that protects tech companies from legal liability for content users have posted on their platforms until they have "actual knowledge" it is present – for example, when another user flags it as illegal.

The platforms then have an obligation to take the content down quickly.

What worries tech firms is that by moderating content more actively – for instance, with improved detection algorithms – they might acquire "actual knowledge" of far more illegal content, and with it the liability that follows if they fail to remove it quickly.

The tech firms are proposing to replace this with "something novel" which would give them "a legal safeguard to better tackle illegal content online," says Siada El Ramly, director general of the European Digital Media Association (EDiMA).

Figure 1: Speak up: Tech giants including Mark Zuckerberg's Facebook are lobbying the EU. (Source: JD Lasica on Flickr CC2.0)

Tech companies' current protections come from the limited liability regime established by the EU's e-Commerce Directive of 2000.

The rationale for obliging tech firms to remove illegal content only once they have "actual knowledge" of it is to avoid forcing companies to police all their users' content for illegality.

Such an obligation would make tech platforms err on the side of blocking more content than necessary, to keep from being sued, and would "inevitably infringe on the fundamental rights to speech and privacy", says EDiMA's paper.

And making tech platforms police their content also could "create insurmountable operational barriers for many – especially smaller – service providers," it says.

In the US, tech companies already have legal protection for moderating content more aggressively.

US law includes a "Good Samaritan" principle in Section 230(c) of the Communications Decency Act, passed in 1996.

This broad-brush principle casts platforms' decisions to moderate content as an exercise of their own freedom of speech, and provides that actions they take to reduce illegal activity online do not affect their limited liability.

The Electronic Frontier Foundation (EFF) calls this "one of the most valuable tools for protecting freedom of expression and innovation on the Internet."

This is somewhat surprising, says the EFF, since "the original purpose of the legislation was to restrict free speech on the Internet".

Be a good Samaritan. Mostly.

The US Good Samaritan principle protects tech companies not only when they remove illegal content, but also when they remove content that is legal yet harmful.

But a different solution is necessary under EU law, says El Ramly.

For one, the US approach relies on distinctly American legal building blocks like the First Amendment.

More importantly, she says, the US approach leaves users little room to appeal a provider's content moderation decisions.

That is, if providers are exercising their own right to free speech when they moderate, it is hard to challenge their decisions.

"The EU approach to the freedom of expression is different to that of the US so our approach to moderating content online must be different also," El Ramly says.

A better solution would give the final decision in any internal appeals process to a human being, with an ultimate right of judicial redress to determine the legality of online content and activity, she says.

"We want users to have a meaningful way to get an explanation regarding why their content was removed and be able to easily appeal content removals," says EDiMA in a tweet.

An accountability mechanism to ensure providers' actions are proportionate, transparent and effective would guard against "overaction" by service providers while protecting the European principle of freedom of expression, it adds in another tweet.

After coronavirus, debugging the European economy

The European Commission is preparing a revision to its digital policy.

Earlier this month, European heads of government asked the Commission to come up with a "Digital Compass" strategy by March 2021, which would set out the EU's digital ambitions for 2030.


Under this policy, at least 20% of EU funds from the Coronavirus Recovery and Resilience Facility will be made available for "digital transition" efforts, including ones aimed at small businesses.

EDiMA hopes to have its content moderation recommendations included in this digital policy rewrite.

EDiMA, the European trade association for online platforms, represents Amazon, Apple, eBay, Expedia, Facebook, Google, LinkedIn, Microsoft, PayPal, Twitter and Airbnb, along with several other tech and new media companies.


Pádraig Belton, contributing editor, special to Light Reading


About the Author(s)

Pádraig Belton

Contributor, Light Reading

