European regulation of video-sharing platforms: what’s new, and will it work?

The European Parliament recently adopted the revised text of the Audiovisual Media Services Directive, which governs the EU-wide coordination of national laws on all audiovisual media. Lubos Kuklis, Director General of the Slovak media regulator, the Council for Broadcasting and Retransmission (CBR), and Chair of the European Regulators Group for Audiovisual Media Services (ERGA), discusses what’s new in the AVMSD review and what it means for the future of freedom of speech.

The new regulatory framework

The revision of the European Audiovisual Media Services Directive (AVMSD), adopted earlier this month after a legislative journey of more than two years, introduces (among other things) an entirely new element in media regulation: a regulatory framework for video-sharing platforms (VSPs). Although built on many concepts from the older regulation of television and, later, video-on-demand services, this set of rules creates a new model of regulation.

And rightly so, because video-sharing platforms (think YouTube, but also video content on social media like Facebook), where users do much of the actual work, don’t have the same strong editorial element as the media traditionally covered by the directive, i.e. television and video on demand. The provisions governing the older types of media are relatively straightforward in imposing obligations on media providers. But the sheer scale of VSP operations, combined with active users who are the primary source of content creation, uploading, remixing and commenting, calls for a different approach.

The main difference lies not only in the framing of the provisions relating to this new type of media, with more open-ended formulations that leave providers some leeway over precisely how to implement the required measures, but also in the distribution of powers between regulators and the providers themselves.

Under the new VSP regulatory framework, it is the VSP provider who is responsible for drawing up the terms and conditions of use for its platform, the mechanisms through which users can lodge complaints, and the systems for rating and flagging content. The regulator, by contrast, plays a more detached role than in older types of media regulation: it primarily assesses whether the mechanisms established by the provider comply with the law.

This approach raises fears that we are simply outsourcing regulation to private companies. Such a delegation of power could certainly conflict with the fundamental rights of the users of the platforms in question, and therefore with the fundamental principles of constitutional democracies.

A recent and thoughtful contribution to the debate along these lines is this post by Joan Barata of Stanford University. While I agree with many points in his text, I also think that Joan’s concerns may not be entirely justified. And because I believe we need an intense discussion about this, I wrote this post partly in response to Joan’s points.

Regulation vs Free Speech

Indeed, the delegation of the exercise of regulatory powers to a private entity could be very detrimental to freedom of expression and of the media. But it must also be recognized that the most widely used platforms are providers of digital public spaces where much of society’s conversation takes place, and in this space the platforms are already regulators in their own right, but with almost no transparency, and therefore no accountability. And since the results of this arrangement are far from satisfactory, it is time to find a workable solution.

Again, leaving regulatory powers to a private entity without any public scrutiny is clearly not the right solution. But that’s not, in my opinion, what the new AVMSD does either. Rather, it obliges VSP providers to put protective measures in place: it requires them to include certain provisions in their terms and conditions, to create a content classification system, and to establish procedures for handling complaints. Many of these tasks were, and still are, part of the remit of media regulators when dealing with radio, television and video-on-demand. But at the scale of online platforms, direct regulatory administration of those tasks is simply not feasible.

Given the scale of the operations in question (and other considerations too: the technological infrastructure, the active role and sheer number of users), there is therefore clearly no other way to oversee content moderation activities than to co-regulate the environment with the platforms themselves.

The risk that Joan and others point out is that regulatory oversight will only serve to make the rules and their enforcement tougher, and that more content takedowns will be interpreted as more effective regulation.

This concern is not entirely misplaced, of course. The last time the European Commission pushed providers to do something about hate speech, it did indeed appear to celebrate more takedowns as an unequivocal success. But without transparency and information about individual cases, you simply cannot tell whether takedowns are really improving the media environment, or whether providers are merely trying to get rid of controversial content, or indeed of any content that someone happens to complain about.

But oversight doesn’t have to be like that. Certainly not if, instead of single-mindedly pushing the platforms to purge their services, it also protects the rights of their uploading users. If the regulator (and of course the users) know what rights users have and can see what procedures providers follow when dealing with user content, then assessing the effectiveness of regulation takes on a whole different meaning. Not only can it push providers to deal with harmful content, it can also protect users from overly strict or arbitrary enforcement of the rules. It can, in other words, create an environment in which users are protected not only from harmful content posted by other users, but also from authoritarian or arbitrary intrusions by the platform itself.

And all of this is legally provided for in the AVMSD. It establishes transparency of procedures between users and platforms. It sets basic standards for user complaints and establishes users’ right to be informed about how their complaints are handled. It provides for an independent regulator to assess whether the mechanisms put in place by the provider are appropriate. It gives users the option of out-of-court redress if they feel they have been mistreated by the platform. Finally, each EU Member State must ensure that users can defend their rights in court.

So I think the legal basis for the protection, but also for the fair treatment, of users is in the directive. It is now up to the Member States to implement it in such a way that this potential is realized (and the European Commission has an important role to play in this process).

And that’s also why I think the debate we’re having now is so important. So thank you, Joan, for kicking it off.

This article gives the author’s point of view and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science.
