Deals in which companies and governments work together to automate decisions about what content is acceptable online are becoming all too common. Whilst content filtering is being proposed openly in EU copyright law, in other situations it is wrapped up in closed-door agreements.

Ruth Coustick-Deal, writing for OpenMedia.org, lays out the “shadow regulation” that complements the dubious legal proposals being drafted to curtail sharing.

Ruth Coustick-Deal: As the excitement over using automation and algorithms in tech to “disrupt” daily life grows, so too does governments’ desire to use it to solve social problems. They hope “automation” will disrupt piracy, online harassment, and even terrorism.

This is particularly true in the case of deploying automated bots for content moderation on the web. These autonomous programs are designed to detect certain categories of posts, and then take down or block them without any human intervention.

In the last few weeks:

1) The UK Government have announced they have developed an algorithmic tool to remove ISIS propaganda from the web.
2) Copyright industries have called for similar programs to be installed that can remove unapproved creative content in the United States.
3) The European Commission has suggested that filters can be used to “proactively detect, identify, and remove” anything illegal – from comments sections on news sites to Facebook posts.
4) The Copyright in the Digital Single Market Directive, currently being debated by MEPs, proposes using technical filters to block copyrighted content from being posted.

There’s a recklessness to all of these proposals, because so many of them involve sidestepping legal processes.

EFF coined the term “shadow regulation” for rules that are made outside of the legislative process, and that’s what is happening here. When it comes to limiting online speech, a cosy relationship has developed between business and governments, and the public is being left out of it.

Let’s take a look at Home Secretary Amber Rudd’s anti-terrorist propaganda tool. She claims it can identify “94% of IS propaganda with 99.995% accuracy.” Backed by this amazingly bold claim, the UK Government want to make the tool available for installation on countless platforms across the web (including major platforms like Vimeo and YouTube), where it would detect and then remove such content. However, this is likely to happen through some form of unofficial “agreement”, rather than legislation that is scrutinised by parliament.
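Claims of “99.995% accuracy” are hard to assess without knowing how much content gets scanned and how rare the targeted material actually is. As a rough, purely illustrative sketch (every figure below is an assumption, not a number from the Home Office or from this article), here is what even a 0.005% false-positive rate could mean at platform scale:

```python
# Back-of-envelope check, illustrative only: every number here is assumed,
# not taken from the Home Office figures quoted above.

daily_uploads = 500_000        # assumed daily uploads on one mid-sized platform
propaganda_share = 0.0001      # assume 0.01% of uploads really are IS propaganda
detection_rate = 0.94          # "94% of IS propaganda" caught
false_positive_rate = 0.00005  # reading "99.995% accuracy" as a 0.005% false-positive rate

actual_bad = daily_uploads * propaganda_share
correctly_removed = actual_bad * detection_rate
wrongly_removed = (daily_uploads - actual_bad) * false_positive_rate

print(f"Genuinely prohibited uploads per day: {actual_bad:.0f}")
print(f"Correctly removed:                    {correctly_removed:.0f}")
print(f"Legitimate uploads wrongly removed:   {wrongly_removed:.0f}")
```

On these made-up numbers, the tool would wrongly remove roughly half as many legitimate uploads each day as it correctly catches, on a single platform, with no court or appeal in the loop.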

Similarly, in the European Commission’s communication on the automated blocking of illegal content, our friends at EDRi point out, “the draft reminds readers – twice – that the providers have “contractual freedom”, meaning that… safeguards will be purely optional.”

If these programs are installed without the necessary public debate, a legal framework, or political consensus, then who will they be accountable to? Who will be held responsible when the wrong content is censored? Will it be the algorithm makers, or the platforms that utilise them? How will people challenge the decisions these filters make?

Even when these ideas have been introduced through legal mechanisms, they still give considerable powers to the platforms themselves. For example, the proposed copyright law we have been campaigning on through Save the Link would block content from being posted simply because the media industry has identified it, not because it is illegal.

The European Commission has suggested using police to tell the companies when a post, image, or video is illegal. There is no consideration of using the courts, which everywhere else are the institutions that make calls about justice. Instead we are installing systems that bypass the rule of law, with only vague gestures towards due process.

Governments are essentially ignoring their human rights obligations by putting private companies in charge. Whether via vague laws or back-room agreements, automated filtering puts huge amounts of power in the hands of a few companies, who get to decide what restrictions are appropriate.

The truth is, the biggest platforms on the web already have unprecedented control over what gets published online. These platforms have become public spaces, where we go to communicate with one another. With these algorithms, however, the owners of the platforms gain an insidious element of control over us. We should be trying to reduce the global power of these companies, rather than handing them the latest tools of automated censorship to use freely.

It’s not just the handing over of power that is problematic. Once something has been identified by police or by the online platform as “illegal,” governments argue that it should never be seen again. What if that “illegal” content is being shown for criticism or news-worthy commentary? Should a witness to terrorism be censored for showing the situation to the world? Filters make mistakes. They cannot become our gods.

Content moderation is one of the trickiest subjects being debated by digital rights experts and academics at the moment. There have been many articles written, many conferences on the subject, and dozens of papers that have tried to consider how we can deal with the volumes of content on the web – and the horrific examples that surface.

Without a doubt, however content moderation happens online, there must be transparency. What exactly gets blocked must be specified in law, and the right to free expression must be considered.
