Europe’s Digital Services Act: on a collision course with human rights

Last year, the EU introduced the Digital Services Act (DSA), an ambitious and thoughtful project to rein in Big Tech and give European internet users more control over their digital lives. It was an exciting moment: the world’s largest trading bloc seemed poised to break with a slew of ill-conceived tech regulations that were both ineffective and incompatible with fundamental human rights.

We were cautiously optimistic, but we had no illusions: the same bad actors who convinced the EU to mandate copyright filters that overblock, underperform and entrench monopolies would also try to turn the DSA into one more excuse to subject Europeans’ speech to automated filtering.

We were right to worry.

The DSA is now moving full speed ahead on a collision course with even more algorithmic filters: the decidedly unintelligent ‘AIs’ that the 2019 Copyright Directive put in charge of the digital expression of 500 million people across the EU’s 27 Member States.

Copyright filters are already making their way into national law as each EU country implements the 2019 Copyright Directive. Years of experience have shown that automated filters are terrible at detecting copyright infringement, both underblocking (letting infringement through) and overblocking (removing content that does not infringe). Filters can also easily be gamed by bad actors into blocking legitimate content, including (for example) recordings made by members of the public of their encounters with police officers.

But as bad as the copyright filters are, the filters the DSA might require are much, much worse.

The Filternet, made in Europe

The current proposals for the DSA, recently approved by an influential European Parliament committee, would require online platforms to remove potentially illegal content at speed. One proposal would automatically make any “active platform” potentially liable for its users’ communications. What is an active platform? Any service that moderates, categorizes, promotes or otherwise processes its users’ content. Punishing services for moderating or categorizing content is absurd: these are both responsible ways of dealing with illegal content.

These requirements hand platforms the impossible task of identifying illegal content in real time, at speeds no human moderator could match, with stiff penalties for getting it wrong. Inevitably, that means more automated filtering: something platforms brag about in public, even as their best engineers privately warn their bosses that these systems don’t work at all.

Large platforms will overblock, removing content on the basis of an algorithm’s quick, blunt determinations, while the appeals of people who have been unfairly silenced will go through a review process that, like the algorithm, is opaque and arbitrary. That review will also be slow: speech will be suppressed in an instant, but restored only after days, or weeks, or 2.5 years.

But at least the largest platforms would be able to comply with the DSA. Things are far worse for small services run by startups, co-ops, nonprofits and other organizations that want to support, not exploit, their users. These companies (“micro-enterprises” in EU jargon) will not be able to operate in Europe at all unless they can raise the funds to pay for legal representation and filtering tools.

So the DSA is putting in place rules under which a handful of American tech giants will control huge swathes of Europeans’ online discourse, because only they can afford to comply. In these American-run walled gardens, algorithms will monitor speech and delete it without warning, regardless of whether the speakers are bullies engaged in harassment or survivors of bullying describing how they were harassed.

It didn’t have to be this way

EU institutions have a long and admirable history of attention to human rights principles. Unfortunately, the EU lawmakers who have revised the DSA since its introduction have set aside human rights concerns raised by EU experts, along with protections already incorporated into EU law.

For example, the E-Commerce Directive, Europe’s foundational technology regulation, balances the need to remove illegal content against the need to assess content to determine whether removal is warranted. Rather than imposing short, unreasonable removal deadlines, the E-Commerce Directive requires web hosts to remove content “expeditiously” once they have determined that it is actually illegal (the “actual knowledge” standard) and “in observance of the principle of freedom of expression”.

This means that if you run a service and learn of illegal activity because a user notifies you, you should remove it within a reasonable time. That is not ideal: as we have written, it should be up to the courts, not platform operators or disgruntled users, to decide what is and is not illegal. But imperfect as it is, it is far better than the current proposals for the DSA.

These proposals would exacerbate the flaws of the E-Commerce Directive, following the catastrophic examples set by Germany’s NetzDG and the French online hate speech bill (a law so poorly constructed that it was swiftly struck down by the French Constitutional Council), and would set removal deadlines that preclude any serious examination. One proposal would require action within 72 hours; another would require platforms to remove content within 24 hours, or even within 30 minutes for live-streamed content.

The E-Commerce Directive also prohibits “general monitoring obligations”, meaning that European governments may not order online services to surveil everything their users do, all the time. Short removal deadlines run counter to this ban, because they cannot be met without exactly that kind of blanket monitoring, and they inevitably infringe the right to freedom of expression.

This ban on surveillance is complemented by the EU General Data Protection Regulation (GDPR), a benchmark for privacy regulation worldwide, which strictly limits the circumstances in which a user may be subjected to “automated decision-making”. In effect, it prohibits putting a user’s participation in online life at the mercy of an algorithm.

Together, the bans on general monitoring and on harmful, non-consensual automated decision-making protect European internet users’ human right to live free of constant surveillance and algorithmic judgment.

Many of the proposed revisions to the DSA break both of these fundamental principles, calling on platforms to detect and restrict content that may be illegal, that has previously been identified as illegal, or that resembles known illegal content. None of this can be done without subjecting everything every user posts to scrutiny.

It doesn’t have to be this way

The DSA can still be salvaged. It can be made to respect human rights and to comply with the E-Commerce Directive and the GDPR. Content removal regimes can be balanced against speech and privacy rights, with timeframes that allow careful assessment of whether a removal request is valid. The DSA can treat appeal systems for content removal as every bit as important as the removal process itself, requiring platforms to build and maintain robust, timely appeals.

The DSA can include a ban on automated filtering mandates, in keeping with the GDPR and grounded in a realistic assessment of what “AI” systems can actually do, made by independent experts rather than by companies hyping an algorithmic pie in the sky.

The DSA can recognize the importance of nurturing small platforms, not out of a fetish for “competition” as a panacea for technology’s ills, but as a means by which users can exercise technological self-determination, banding together to run, or to demand, online social spaces that respect their standards, interests and dignity. That recognition means ensuring that every obligation the DSA imposes takes account of each actor’s size and capacity. This is in line with the recommendations of the European Commission’s own DSA impact assessment, recommendations that have so far been largely ignored.

The EU and the rest of the world

European regulations often serve as a template for rules around the world. The GDPR created momentum that culminated in privacy laws such as California’s CCPA, while NetzDG inspired even worse regulations and proposals in Australia, the UK and Canada.

Mistakes made by EU lawmakers in crafting the DSA will reverberate around the world, affecting vulnerable populations that have so far gone unconsidered in the DSA’s drafting and revision.

The problems presented by Big Tech are real, they are urgent, and they are global. The world cannot afford disastrous EU technology regulations that sideline human rights in a quest for easy answers and false quick fixes.

