The post The EU’s Copyright Proposal is Extremely Bad News for Everyone, Even (Especially!) Wikipedia appeared first on P2P Foundation.
Cory Doctorow: The pending update to the EU Copyright Directive is coming up for a committee vote on June 20 or 21 and a parliamentary vote either in early July or late September. While the directive fixes some longstanding problems with EU rules, it creates much, much larger ones: problems so big that they threaten to wreck the Internet itself.
Under Article 13 of the proposal, sites that allow users to post text, sounds, code, still or moving images, or other copyrighted works for public consumption will have to filter all their users’ submissions against a database of copyrighted works. Sites will have to pay to license the technology to match submissions to the database, and to identify near matches as well as exact ones. Sites will be required to have a process to allow rightsholders to update this list with more copyrighted works.
Even under the best of circumstances, this presents huge problems. Algorithms that do content-matching are frankly terrible at it. The Made-in-the-USA version of this is YouTube’s Content ID system, which improperly flags legitimate works all the time, but still gets flak from entertainment companies for not doing more.
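To see why, consider what such a filter actually has to do. Below is a deliberately simplified sketch of the kind of fingerprint-and-threshold matching an upload filter might perform; the hashing scheme, the threshold, and the sample data are illustrative assumptions, not how Content ID or any real system works:

```python
# A toy "upload filter", assuming (purely for illustration) that works are
# fingerprinted as sets of hashed word 5-grams and matched by Jaccard overlap.
# Real systems fingerprint audio or video frames, but the failure mode is the
# same: near-match thresholds catch quotation, background music, and incidental use.

def fingerprint(text: str, n: int = 5) -> set[int]:
    words = text.lower().split()
    return {hash(" ".join(words[i:i + n])) for i in range(len(words) - n + 1)}

def jaccard(a: set[int], b: set[int]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

# Rightsholder-supplied reference database (Article 13 obliges sites to accept updates).
reference_db = {
    "some-registered-work": fingerprint("lyrics of a registered song " * 20),
}

def check_upload(upload_text: str, threshold: float = 0.1) -> list[str]:
    """Return the registered works this upload would be blocked for."""
    fp = fingerprint(upload_text)
    return [work for work, ref in reference_db.items() if jaccard(fp, ref) >= threshold]

# A post that merely quotes a short passage of the registered work for criticism
# clears the near-match threshold and is blocked, with no human in the loop.
quoting_post = "my review quotes this: " + "lyrics of a registered song " * 5
print(check_upload(quoting_post))  # -> ['some-registered-work']
```

Every parameter in that sketch is a judgment call: raise the threshold and infringement slips through; lower it and quotation, parody, and incidental capture get swept up. Article 13’s liability rules push platforms toward the over-blocking end.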
There are lots of legitimate reasons for Internet users to upload copyrighted works. You might upload a clip from a nightclub (or a protest, or a technical presentation) that includes some copyrighted music in the background. Or you might just be wearing a t-shirt with your favorite album cover in your Tinder profile. You might upload the cover of a book you’re selling on an online auction site, or you might want to post a photo of your sitting room in the rental listing for your flat, including the posters on the wall and the picture on the TV.
Wikipedians have even more specialised reasons to upload material: pictures of celebrities, photos taken at newsworthy events, and so on.
But the bots that Article 13 mandates will not be perfect. In fact, by design, they will be wildly imperfect.
Article 13 punishes any site that fails to block copyright infringement, but it won’t punish people who abuse the system. There are no penalties for falsely claiming copyright over someone else’s work, which means that someone could upload all of Wikipedia to a filter system (for instance, one of the many sites that incorporate Wikipedia’s content into their own databases) and then claim ownership over it on Twitter, Facebook and WordPress, and everyone else would be prevented from quoting Wikipedia on any of those services until they sorted out the false claims. It will be a lot easier to make these false claims than it will be to figure out which of the hundreds of millions of copyright claims are real and which ones are pranks or hoaxes or censorship attempts.
Article 13 also leaves you out in the cold when your own work is censored thanks to a malfunctioning copyright bot. Your only option when you get censored is to raise an objection with the platform and hope they see it your way—but if they fail to give real consideration to your petition, you have to go to court to plead your case.
Article 13 gets Wikipedia coming and going: not only does it create opportunities for unscrupulous or incompetent people to block the sharing of Wikipedia’s content beyond its bounds, it could also require Wikipedia to filter submissions to the encyclopedia and its surrounding projects, like Wikimedia Commons. The drafters of Article 13 have tried to carve Wikipedia out of the rule, but thanks to sloppy drafting, they have failed: the exemption is limited to “noncommercial activity”. Every file on Wikipedia is licensed for commercial use.
Then there are the websites that Wikipedia relies on as references. The fragility and impermanence of links is already a serious problem for Wikipedia’s crucial footnotes, but after Article 13 becomes law, any information hosted in the EU might disappear—and links to US mirrors might become infringing—at any moment thanks to an overzealous copyright bot. For these reasons and many more, the Wikimedia Foundation has taken a public position condemning Article 13.
Speaking of references: the problems with the new copyright proposal don’t stop there. Under Article 11, each member state will get to create a new copyright in news. If it passes, in order to link to a news website, you will either have to do so in a way that satisfies the limitations and exceptions of all 28 laws, or you will have to get a license. This is fundamentally incompatible with any sort of wiki (obviously), much less Wikipedia.
It also means that the websites that Wikipedia relies on for its reference links may face licensing hurdles that would limit their ability to cite their own sources. In particular, news sites may seek to withhold linking licenses from critics who want to quote from them in order to analyze, correct and critique their articles, making it much harder for anyone else to figure out where the positions are in debates, especially years after the fact. This may not matter to people who only pay attention to news in the moment, but it’s a blow to projects that seek to present and preserve long-term records of noteworthy controversies. And since every member state will get to make its own rules for quotation and linking, Wikipedia posts will have to satisfy a patchwork of contradictory rules, some of which are already so severe that they’d ban any items in a “Further Reading” list unless the article directly referenced or criticized them.
The controversial measures in the new directive have been tried before. For example, link taxes were tried in Spain and Germany and they failed, and publishers don’t want them. Indeed, the only country to embrace this idea as workable is China, where mandatory copyright enforcement bots have become part of the national toolkit for controlling public discourse.
Articles 13 and 11 are poorly thought through, poorly drafted, unworkable—and dangerous. The collateral damage they will impose on every realm of public life can’t be overstated. The Internet, after all, is inextricably bound up in the daily lives of hundreds of millions of Europeans, and an entire constellation of sites and services will be adversely affected by Article 13. Europe can’t afford to place education, employment, family life, creativity, entertainment, business, protest, politics, and a thousand other activities at the mercy of unaccountable algorithmic filters. If you’re a European concerned about these proposals, here’s a tool for contacting your MEP.
Photo by ccPixs.com
The post The dangerous trend for automating censorship, and circumventing laws appeared first on P2P Foundation.
Ruth Coustick-Deal, writing for OpenMedia.org, lays out the “shadow regulation” complementing the dubious legal propositions being drafted to curtail sharing.
Ruth Coustick-Deal: As the excitement over using automation and algorithms in tech to “disrupt” daily life grows, so too does governments’ desire to use it to solve social problems. They hope “automation” will disrupt piracy, online harassment, and even terrorism.
This is particularly true in the case of deploying automated bots for content moderation on the web. These autonomous programs are designed to detect certain categories of posts, and then take down or block them without any human intervention.
In the last few weeks:
1) The UK Government have announced they have developed an algorithmic tool to remove ISIS presence from the web.
2) Copyright industries have called for similar programs to be installed that can remove unapproved creative content in the United States.
3) The European Commission has suggested that filters can be used to “proactively detect, identify, and remove” anything illegal – from comments sections on news sites to Facebook posts.
4) The Copyright in the Digital Single Market Directive, currently being debated by MEPs, is proposing using technical filters to block copyrighted content from being posted.
There’s a recklessness to all of these proposals, because so many of them involve sidestepping legal processes.
EFF coined the term “shadow regulation” for rules that are made outside of the legislative process, and that’s what is happening here. A cosy relationship between business and governments has developed, and when it comes to limiting online speech, the public is being left out of it.
Let’s take a look at Home Secretary Amber Rudd’s tool for detecting terrorist propaganda. She claims it can identify “94% of IS propaganda with 99.995% accuracy.” Backed by this amazingly bold claim, the UK Government want to make the tool available for installation on countless platforms across the web (including major platforms like Vimeo and YouTube), which would then detect and remove such content. However, this is likely to happen through some form of unofficial “agreement”, rather than legislation that is scrutinised by parliament.
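Even taking the accuracy figure at face value, the arithmetic of false positives at web scale is sobering. A back-of-the-envelope sketch, with upload volumes that are purely illustrative assumptions:

```python
# Back-of-the-envelope check on a "99.995% accuracy" claim at scale.
# The volumes below are illustrative assumptions, not real platform figures.
daily_uploads = 1_000_000        # assumed uploads per day on one platform
actual_propaganda = 100          # assumed uploads that really are propaganda
detection_rate = 0.94            # "identifies 94% of IS propaganda"
false_positive_rate = 0.00005    # the optimistic flip side of "99.995% accuracy"

true_positives = actual_propaganda * detection_rate
false_positives = (daily_uploads - actual_propaganda) * false_positive_rate
share_lawful = false_positives / (true_positives + false_positives)

print(f"flagged per day: {true_positives + false_positives:.0f}, "
      f"of which lawful: {false_positives:.0f} ({share_lawful:.0%})")
# -> flagged per day: 144, of which lawful: 50 (35%)
```

In other words, even under those generous assumptions, roughly a third of what the filter removes is lawful speech, and nobody in that loop is a judge.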
Similarly, regarding the European Commission’s communication on automating the blocking of illegal content, our friends at EDRi point out that “the draft reminds readers – twice – that the providers have ‘contractual freedom’, meaning that… safeguards will be purely optional.”
If these programs are installed without the necessary public debate, a legal framework, or political consensus – then who will they be accountable to? Who is going to be held responsible for censorship of the wrong content? Will it be the algorithm makers? Or the platforms that utilise them? How will people object to the changes?
Even when these ideas have been introduced through legal mechanisms, they still give considerable powers to the platforms themselves. For example, the proposed copyright law we have been campaigning on through Save the Link blocks content from being posted simply because the media industry has identified it, not because it is illegal.
The European Commission has suggested using police to tell the companies when a post, image, or video is illegal. There is no consideration of using the courts, which are the institutions that make such calls everywhere else. Instead we are installing systems that bypass the rule of law, with only vague gestures towards due process.
Governments are essentially ignoring their human rights obligations by putting private companies in charge. Whether via vague laws or back-room agreements, automated filtering is putting huge amounts of power in the hands of a few companies, who are getting to decide what restrictions are appropriate.
The truth is, the biggest platforms on the web already have unprecedented control over what gets published online. These platforms have become public spaces, where we go to communicate with one another. With these algorithms, however, there is an insidious element of control that the owners of the platforms have over us. We should be trying to reduce the global power of these companies, rather than handing them the latest tools of automated censorship to use freely.
It’s not just the handing over of power that is problematic. Once something has been identified by police or by the online platform as “illegal,” governments argue that it should never be seen again. What if that “illegal” content is being shown for criticism or newsworthy commentary? Should a witness to terrorism be censored for showing the situation to the world? Filters make mistakes. They cannot become our gods.
Content moderation is one of the trickiest subjects being debated by digital rights experts and academics at the moment. There have been many articles written, many conferences on the subject, and dozens of papers that have tried to consider how we can deal with the volumes of content on the web – and the horrific examples that surface.
Without a doubt, however content moderation happens online, there must be transparency. It must be specified in law what exactly gets blocked. And the right to free expression must be considered.
The post The catastrophic consequences of the non-Neutral Net will be very hard to spot, until it’s too late appeared first on P2P Foundation.
Cory Doctorow: Stanford’s Futurity interviews Stanford Law expert Ryan Singel and International Studies expert Didi Kuo about the meaning of a non-Neutral internet, and the pair make an excellent and chilling point about the subtle, profound ways that Ajit Pai’s rollback of Net Neutrality rules to pre-2005 levels will distort and hobble the future internet.
The Pai rules allow ISPs to block rival services, but the real impact is likely to be much more subtle (and thus harder to spot in the moment and stop while there’s still time).
The ISPs are much more likely to approach existing internet services like Netflix and demand money in return for a guarantee that their bits will reach you, the ISPs’ customers. The services, in turn, will simply raise their prices to make up the difference, resulting in you paying your ISP twice: once to connect to the internet, and a second time to subsidize the blackmail payments the internet services you use are now obliged to make to your ISP.
There’s another, even subtler and scarier distortion at work here. The ISPs want to create steady revenue streams from these services, and so the blackmail payments they demand will not exceed the services’ ability to pay. But they will limit who else can enter the market: Netflix and YouTube and the other established players were able to start because the capital needs of a video-on-demand service did not include a line item for blackmail to ISPs.
Future Netflix and YouTube challengers will have it different: their startup costs will include millions for hard drives and marketing and bandwidth — and millions more for bribes to the telcos.
This is bad news for people who like watching videos, but it’s even worse news for people who make videos. With upstarts permanently, structurally frozen out of the market, today’s incumbent providers will become much like the telcos themselves: cozy, cooperative, and more interested in colluding than competing. Some of that will take the form of explicit conspiracies, but highly concentrated, stable industries can collude without conspiring: the executives tend to have worked at all the major firms at some point in their careers, know each other socially, understand one another’s turf and territories, maintain out-of-work friendships and even intermarry. Without anyone having to draw up an agreement, these industries are perfectly capable of creating arrangements that are mutually beneficial and that freeze out any new entrants.
The online service providers understand that Pai’s rules mean that they’re just going to have to divert some profits to the telcos, but will not face an existential threat. They’ll always have a seat at the table: but the companies that don’t exist yet? They never get a seat at the table.
Here’s how to understand Net Neutrality: you get in a cab and ask it to take you to a Safeway, and you notice that it’s circling the block for no reason, delaying your arrival. “What gives?” you ask. The cabby explains that Whole Foods has paid for “premium carriage” by the cab firm, and so it gets “fast lane” service — which means that everyone else gets the slow lane. The cab driver explains that running a taxi is expensive and hard work, and that choosing one grocer over another helps the cab company fund its maintenance, operations and upgrades.
That’s nice for the cab company, but you didn’t get into the cab to be taken to the most profitable destination for the cab company — you got in to be taken to the place you wanted to go.
The cabbie says, “Hell, why are you being so particular? Safeway and Whole Foods aren’t that different. Besides, Safeway makes decisions about what food you buy: they don’t carry every possible grocery item, and they arrange their groceries in the way that suits them, not you. Why do you get pissed off when the cab company steers you toward the stores of its choosing, but you’re happy to shop at a store that sends you to the items of its choosing?”
The answer, of course, is that it’s none of the taxi’s business. Maybe Safeway is gouging its suppliers for endcaps, and maybe it isn’t, but that’s between you and Safeway. You might choose to tackle that yourself, or it might not matter to you. It’s not the cab company’s job to tell you where to go: it’s their job to go where you tell them.
Singel: The effects we’re likely to see will affect users secondarily. Verizon, for instance, can now go to a Yelp or a Netflix and say, “You need to pay us X amount of money per month, so your content loads for Verizon subscribers.” And there’s no other way for Netflix to get to Verizon subscribers except through Verizon, so they’ll be forced to pay. That cost will then get pushed onto people that subscribe to Netflix.
So what users do online will become more expensive, we’ll see fewer free things, and thus the internet will become more consolidated. Websites, blogs, and startups that don’t have the money to pay won’t survive. I like to think of it as the internet is going to get more boring.
Kuo: The worst-case scenario would be if ISPs blocked access to websites based on their content, but that scenario seems unlikely outside of a few limited applications, such as file-sharing. The ISPs have an interest in being apolitical and letting the internet remain “open,” at least in the ways that will be most apparent to consumers.
More likely, the rollback of net neutrality will have consequences for start-ups and companies with a web presence. It will allow ISPs to charge companies more to reach consumers. While large technology platforms can afford to pay for fast access, start-ups and competitors will have a far more difficult time.
What could net neutrality’s end mean for you? [Futurity]
(via Naked Capitalism)
The post Pull the Plug on Internet Spying Programs appeared first on P2P Foundation.
An important campaign by our friends at the EFF. Read this article for additional information.
Electronic Frontier Foundation: Many were shocked to learn that the U.S. indiscriminately vacuums up the communications of millions of innocent people – both around the world and at home – through surveillance programs under Section 702, originally enacted by the FISA Amendments Act. This warrantless, suspicionless surveillance violates established privacy protections, including the Fourth Amendment.
The U.S. government uses Section 702 to justify the collection of the communications of innocent people overseas and in the United States by tapping into the cables that carry domestic and international Internet communications through what’s known as Upstream surveillance. The government also forces major U.S. tech companies to turn over private communications stored on their servers through a program often referred to as PRISM. While the programs under Section 702 are theoretically aimed at foreigners outside the United States, they constantly collect Americans’ communications with no meaningful oversight from the courts.
These programs are a gross violation of Americans’ constitutional rights. Communicating with anyone who is potentially located abroad does not invalidate your Fourth Amendment protections.
Tell your representatives in Congress that it is time to let the sun set on this mass Internet spying.
The post W3C abandons consensus, standardizes DRM, EFF resigns appeared first on P2P Foundation.
Cory Doctorow: In July, the Director of the World Wide Web Consortium overruled dozens of members’ objections to publishing a DRM standard without a compromise to protect accessibility, security research, archiving, and competition.
EFF appealed the decision, the first-ever appeal in W3C history, which concluded last week with a deeply divided membership. 58.4% of the group voted to go on with publication, and the W3C did so today, an unprecedented move in a body that has always operated on consensus and compromise. In their public statements about the standard, the W3C executive repeatedly said that they didn’t think the DRM advocates would be willing to compromise, and in the absence of such willingness, the executive has given them everything they demanded.
This is a bad day for the W3C: it’s the day it publishes a standard designed to control, rather than empower, web users. That standard was explicitly published without any protections — even the most minimal compromise was rejected without discussion, an intransigence that the W3C leadership tacitly approved. It’s the day that the W3C changed its process to reward stonewalling over compromise, provided those doing the stonewalling are the biggest corporations in the consortium.
EFF no longer believes that the W3C process is suited to defending the open web. We have resigned from the Consortium, effective today. Below is our resignation letter:
Dear Jeff, Tim, and colleagues,
In 2013, EFF was disappointed to learn that the W3C had taken on the project of standardizing “Encrypted Media Extensions,” an API whose sole function was to provide a first-class role for DRM within the Web browser ecosystem. By doing so, the organization offered the use of its patent pool, its staff support, and its moral authority to the idea that browsers can and should be designed to cede control over key aspects from users to remote parties.
When it became clear, following our formal objection, that the W3C’s largest corporate members and leadership were wedded to this project despite strong discontent from within the W3C membership and staff, their most important partners, and other supporters of the open Web, we proposed a compromise. We agreed to stand down regarding the EME standard, provided that the W3C extend its existing IPR policies to deter members from using DRM laws in connection with the EME (such as Section 1201 of the US Digital Millennium Copyright Act or European national implementations of Article 6 of the EUCD) except in combination with another cause of action.
This covenant would allow the W3C’s large corporate members to enforce their copyrights. Indeed, it kept intact every legal right to which entertainment companies, DRM vendors, and their business partners can otherwise lay claim. The compromise merely restricted their ability to use the W3C’s DRM to shut down legitimate activities, like research and modifications, that required circumvention of DRM. It would signal to the world that the W3C wanted to make a difference in how DRM was enforced: that it would use its authority to draw a line between the acceptability of DRM as an optional technology, as opposed to an excuse to undermine legitimate research and innovation.
More directly, such a covenant would have helped protect the key stakeholders, present and future, who both depend on the openness of the Web, and who actively work to protect its safety and universality. It would offer some legal clarity for those who bypass DRM to engage in security research to find defects that would endanger billions of web users; or who automate the creation of enhanced, accessible video for people with disabilities; or who archive the Web for posterity. It would help protect new market entrants intent on creating competitive, innovative products, unimagined by the vendors locking down web video.
Despite the support of W3C members from many sectors, the leadership of the W3C rejected this compromise. The W3C leadership countered with proposals — like the chartering of a nonbinding discussion group on the policy questions, which was not scheduled to report until long after the EME ship had sailed — that would still have left researchers, governments, archives, and security experts unprotected.
The W3C is a body that ostensibly operates on consensus. Nevertheless, as the coalition in support of a DRM compromise grew and grew — and the large corporate members continued to reject any meaningful compromise — the W3C leadership persisted in treating EME as a topic that could be decided by one side of the debate. In essence, a core of EME proponents was able to impose its will on the Consortium, over the wishes of a sizeable group of objectors — and every person who uses the web. The Director decided to personally override every single objection raised by the members, articulating several benefits that EME offered over the DRM that HTML5 had made impossible.
But those very benefits (such as improvements to accessibility and privacy) depend on the public being able to exercise rights they lose under DRM law — which meant that without the compromise the Director was overriding, none of those benefits could be realized, either. That rejection prompted the first appeal against the Director in W3C history.
In our campaigning on this issue, we have spoken to many, many members’ representatives who privately confided their belief that the EME was a terrible idea (generally they used stronger language) and their sincere desire that their employer wasn’t on the wrong side of this issue. This is unsurprising. You have to search long and hard to find an independent technologist who believes that DRM is possible, let alone a good idea. Yet, somewhere along the way, the business values of those outside the web got important enough, and the values of technologists who built it got disposable enough, that even the wise elders who make our standards voted for something they know to be a fool’s errand.
We believe they will regret that choice. Today, the W3C bequeaths a legally unauditable attack surface to browsers used by billions of people. They give media companies the power to sue or intimidate away those who might re-purpose video for people with disabilities. They side against the archivists who are scrambling to preserve the public record of our era. The W3C process has been abused by companies that made their fortunes by upsetting the established order, and now, thanks to EME, they’ll be able to ensure no one ever subjects them to the same innovative pressures.
So we’ll keep fighting to keep the web free and open. We’ll keep suing the US government to overturn the laws that make DRM so toxic, and we’ll keep bringing that fight to the world’s legislatures that are being misled by the US Trade Representative to instigate local equivalents to America’s legal mistakes.
We will renew our work to battle the media companies that fail to adapt videos for accessibility purposes, even though the W3C squandered the perfect moment to exact a promise to protect those who are doing that work for them.
We will defend those who are put in harm’s way for blowing the whistle on defects in EME implementations.
It is a tragedy that we will be doing that without our friends at the W3C, and with the world believing that the pioneers and creators of the web no longer care about these matters.
Effective today, EFF is resigning from the W3C.
Thank you,
Cory Doctorow
Advisory Committee Representative to the W3C for the Electronic Frontier Foundation
The post Stop #CensorshipMachine: EU copyright threatens our freedoms appeared first on P2P Foundation.
The post Why “Reforming” Copyright Will Kill It appeared first on P2P Foundation.
This would amount to obliterating the “DRM Curtain” model of capitalism in the information field — a system of economic extraction and class rule comparable to the system of bureaucratic privilege in the old Soviet Union in its reliance on suppressing the free flow of information. It would put an end to the centerpieces of copyright culture today — DMCA takedowns, “three strikes” laws cutting off ISP services to illegal downloaders, and domain seizures of file-sharing sites.
But as Cory Doctorow points out (Courtney Nash, “Cory Doctorow on legally disabling DRM (for good),” O’Reilly Media, Aug. 17), this won’t just destroy the draconian legal regime in what’s conventionally regarded as the information industries — music, movies, software, etc. — but also the increasingly prevalent use of copyrighted software to enforce proprietary designs and business models for physical goods. This includes limiting appliances to proprietary replacement parts and accessories (like printer cartridges) by DRMing the appliances to reject replacement parts that don’t pass an “integrity check” that verifies they come from the manufacturer.
“This is a live issue in a lot of domains. It’s in insulin pumps, it’s in voting machines, it’s in tractors…. Several security researchers filed a brief saying they had discovered grave defects in products as varied as voting machines, insulin pumps and cars, and they were told by their counsel that they couldn’t disclose because, in so doing, they would reveal information that might help someone bypass DRM, and thus would face felony prosecution and civil lawsuits.”
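The “integrity check” described above is usually nothing more exotic than a shared-secret handshake between the appliance and the part. A minimal sketch, with hypothetical names and keys; real devices do this in firmware or in a dedicated authentication chip:

```python
# A toy parts-pairing "integrity check": the appliance challenges the part,
# and only parts carrying the manufacturer's secret can answer correctly.
# All names, keys, and protocol details are hypothetical illustrations.
import hashlib
import hmac
import os

MANUFACTURER_SECRET = b"burned-into-official-cartridges"  # hypothetical shared secret

def part_respond(challenge: bytes, part_secret: bytes) -> bytes:
    """What the (official or third-party) replacement part computes."""
    return hmac.new(part_secret, challenge, hashlib.sha256).digest()

def appliance_accepts(part_secret: bytes) -> bool:
    """What the appliance's firmware checks before agreeing to work with the part."""
    challenge = os.urandom(16)
    expected = hmac.new(MANUFACTURER_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(part_respond(challenge, part_secret), expected)

print(appliance_accepts(MANUFACTURER_SECRET))        # official part -> True
print(appliance_accepts(b"third-party-compatible"))  # compatible part -> False
```

Nothing about that handshake is hard to reimplement; what makes it effective is that extracting or bypassing the key counts as “circumvention” under laws like the DMCA, which is exactly the legal exposure the researchers in that brief were warned about.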
In short, eliminating the legal enforcement of DRM — enforcement by criminal law, not civil — would effectively destroy all business models based on proprietary digital information, both in the “information industries” as such and in manufacturing. And these are, mind you, the primary source of profit in today’s global corporate economy.
Interestingly enough, thinkers like Doctorow and Lawrence Lessig say they’re not against copyright — they just want to reform it and make it more reasonable. But from what we’ve seen above, copyright is absolutely dependent on police state measures like the DMCA and the “intellectual property” provisions in “Free Trade” Agreements like the TPP for its survival in any remotely recognizable form.
That’s not to say copyright would cease to exist or be enforceable in any form. But what would be left of it, absent DMCA takedowns and criminal prosecution for file-sharing, would be the quaint world of copyright in the 1970s. The main material effect of copyright law would be to prevent the mass printing of unauthorized versions of copyrighted books, or of hard copies of recordings for sale in stores. And that would be far less significant for readers and listeners than it was back in the ’70s, when the inconvenient or poor-quality output of photocopiers and cassette recorders was the main threat to the publishing and record industries. Back in those days, the significance of copyright as a mechanism for rent extraction was relatively marginal compared to capitalism’s other sources of profit.
The model of proprietary digital capitalism we’re familiar with — the central model of global corporate rent extraction — is absolutely dependent on police state measures like criminalizing the circumvention of DRM, the takedown (without due process of any kind) of allegedly infringing content online, and government seizure of Internet domains and web hosting servers without due process. Without them, it would simply collapse.
But fortunately, that model of capitalism is doomed regardless of the outcome of EFF’s lawsuit (and I wish it well!). Even as it is, circumvention technologies have advanced so rapidly that DRM-cracked versions of new movies and songs typically show up on torrent sites the same day they’re released, and Millennials accept file-sharing as a simple fact of life. This culture of circumvention is now spreading into academic publishing with Sci-Hub. How long before it spreads to proprietary spare parts and diagnostic software?
As always, as Center for a Stateless Society comrade Charles Johnson says (“Counter-Economic optimism,” Rad Geek People’s Daily, Feb. 7, 2009), an ounce of circumvention is worth a pound of lobbying.