Words can come back to haunt you—that’s the uncomfortable truth gradually emerging for social media and other digital platform companies in Europe that disseminate user-generated content. As concern skyrockets over political disinformation, hate speech, and terrorist incitement on the Internet, legislators across Europe are scrambling for regulatory answers.
Announcing…the Digital Services Act
The European Commission now has signaled its readiness to join the fray. On July 16, Commission President-elect Ursula von der Leyen announced, as part of the political guidelines that set the legislative course for the next five years in Brussels, the goal of adopting a “Digital Services Act.” It would be the next major step—and potentially one of the most consequential—in the EU’s ongoing Digital Single Market (DSM) initiative, which has been gradually remaking the bloc’s laws governing electronic commerce, copyright, and privacy.
Von der Leyen said the Digital Services Act would “upgrade
our liability and safety rules for digital platforms, services, and products.” Its
broad goal, Commissioner for the Security Union Julian King added during a
recent visit to Washington, would be to modernize the treatment of issues
relating to online harm. A leaked initial internal concept paper prepared by
the Directorate General for Communications Networks, Content, and Technology
(DG CONNECT) confirmed that rethinking liability rules would be a principal
goal.
But while the potential legislation already appears to have a name, its regulatory approach and precise content are still far from determined. The Commission may well require a year or more to develop its legislative proposal. First, it will prepare a consultation document, publicly laying out regulatory options of varying degrees of rigor and inviting comment from companies and other stakeholders.
EU platform liability rules today
EU rules on platform liability for user-generated content
currently are set by the E-Commerce
Directive, a twenty-year-old framework law that contains few detailed rules
tailored for today’s Internet. Under
Article 14 of the E-Commerce Directive, a company such as Facebook is not
liable for information posted by social media users if it has no knowledge of
the illegal nature of the information, or if the platform acts expeditiously to
remove, or disable access to, such information of which it has become aware. However,
service providers do voluntarily police illegal content such as child
pornography, removing it in line with a non-binding EU code of conduct.
Similarly, platforms across the EU engage in self-regulation of hate speech
(except in Germany where the NetzDG law imposes
take-down obligations).
Early on, the Commission will have to decide whether to make the Digital Services Act a wholesale revision of the E-Commerce Directive, or instead to concentrate on the subset of liability issues relating to digital platforms. The latter course could prove more politically appealing and achievable, reducing the risk of the EU legislature getting bogged down in the wide array of legal and policy issues addressed in the E-Commerce Directive.
From self-policing to mandatory content removal
A Digital Services Act likely would empower governmental
authorities to order take-down of these types of content, although it might
well treat hate speech in a more nuanced fashion due to varying member state
constitutional protections. Earlier this year, the EU’s revised Copyright
Directive enacted a similar change in the rules governing platforms’
oversight of copyrighted content, compelling the removal from their sites of
copyright-infringing material and urging the use of automated filtering. The proposed
regulation on terrorism content online, currently advancing through the EU legislative process, takes a
comparable approach, requiring platforms to search for offending content and
prevent its dissemination.
EU courts have also joined the move towards controlling the dissemination of problematic user-generated content via social media. This year, the European Court of Justice (ECJ) took up a case involving a Facebook post endorsing an online Austrian news report that had criticized the position taken by a prominent Austrian Green politician, Eva Glawischnig-Piesczek, on refugee policy. An Austrian court previously had found the original news item to defame the politician, but referred to the ECJ the question of whether the E-Commerce Directive nevertheless would shield its further dissemination via the user’s Facebook account. The ECJ earlier this month authorized the Austrian court to require Facebook to block access to the user post linking to the defamatory content, not only within the EU but on a global basis.
The similar US approach to platform liability
The United States historically has taken a similar approach to the
EU’s E-Commerce Directive in establishing for internet platforms a safe harbor
from intermediary liability for user-generated content. Section 230 of the
Communications Decency Act of 1996, often
referred to as “the twenty-six words that created the Internet,” was
long sacrosanct in Washington, but lately has been challenged in ways similar
to Europe. Notably, Section 230 was amended in 2018 to withdraw its liability
protection from material violating federal or state sex trafficking laws,
following a federal court decision that had created doubt whether the existing
law permitted such claims.
a step. This year, conservative members of Congress have sharply criticized
Section 230’s protections as allegedly enabling internet platforms to host user
content in a politically biased manner.
Despite mounting Congressional pressure to further
change Section 230, the US executive branch continues to press foreign
governments to adopt similar intermediary liability protections for internet
platforms. Both the United States–Mexico–Canada Free Trade Agreement (USMCA)
and a newly inked bilateral trade pact with Japan contain digital commerce
provisions committing those governments to enact counterpart legal provisions.
Recent EU trade agreements also have begun to incorporate digital commerce provisions but are notably silent on the question of intermediary liability protections for platforms. The Commission is unlikely to change this posture at a time when it will be embarking on a substantial revision of the E-Commerce Directive. Thus, if the United States and EU undertake negotiations on a limited trade agreement including digital trade provisions—as is periodically rumored in Washington and Brussels—the EU is very unlikely to agree to a US proposal on intermediary liability along the lines of the USMCA or US-Japan trade agreement.
Transatlantic dialogue on tackling online harms?
Nonetheless, the United States and the EU could
benefit from a broad policy dialogue about the merits and challenges of their
respective legislative provisions conferring content immunity for platforms. After
all, the same social networks have transformed the dissemination of information
on both sides of the Atlantic in similar ways. And the economic stakes of a
changed European liability regime for US platform providers are substantial.
US executive branch agencies and the European Commission could lead such discussions, and it might be useful to bring in their respective legislators as well. Dialogue would only be productive, of course, if pursued before legislative intentions are set in concrete. Policy makers in Washington and Brussels often lament the lack of prior transatlantic consultation on legislative reforms that have consequences for the other jurisdiction. Jointly reflecting on the best means for combatting online harms is a tailor-made opportunity to reverse that dismal history.
Kenneth Propp is a nonresident senior fellow in the Atlantic Council’s Future Europe Initiative.