“The Internet will not automatically preserve—never mind improve—the health of democratic politics. Yes, a wired future might look good for democracy if some of the social functions currently performed by traditional media are taken up by new Internet projects. But that outcome needs to be demonstrated—perhaps constructively aimed at—rather than assumed. For populists such as Shirky, the need for considered political commitment does not even merit discussion. The triathlon must go on, even if the athletes become brainwashed and bigoted.”
Book: Cognitive Surplus: Creativity and Generosity in a Connected Age. Clay Shirky. The Penguin Press, 2010
In the Boston Review, Evgeny Morozov has a rather devastating review of Shirky’s new book.
But before excerpting these remarks, a reminder of what the "cognitive surplus" means:
the unused potential of the minds of 6.7 billion people. Social media is unlocking this potential, as technology allows us to be creative and productive instead of merely consumptive.
Clay Shirky explains:
“So how big is that surplus? So if you take Wikipedia as a kind of unit, all of Wikipedia, the whole project–every page, every edit, every talk page, every line of code, in every language that Wikipedia exists in–that represents something like the cumulation of 100 million hours of human thought. I worked this out with Martin Wattenberg at IBM; it’s a back-of-the-envelope calculation, but it’s the right order of magnitude, about 100 million hours of thought.
And television watching? Two hundred billion hours, in the U.S. alone, every year. Put another way, now that we have a unit, that’s 2,000 Wikipedia projects a year spent watching television. Or put still another way, in the U.S., we spend 100 million hours every weekend, just watching the ads. This is a pretty big surplus. People asking, “Where do they find the time?” when they’re looking at things like Wikipedia don’t understand how tiny that entire project is, as a carve-out of this asset that’s finally being dragged into what Tim calls an architecture of participation.
Now, the interesting thing about a surplus like that is that society doesn’t know what to do with it at first–hence the gin, hence the sitcoms. Because if people knew what to do with a surplus with reference to the existing social institutions, then it wouldn’t be a surplus, would it? It’s precisely when no one has any idea how to deploy something that people have to start experimenting with it, in order for the surplus to get integrated, and the course of that integration can transform society.
And this is the other thing about the size of the cognitive surplus we’re talking about. It’s so large that even a small change could have huge ramifications. Let’s say that everything stays 99 percent the same, that people watch 99 percent as much television as they used to, but 1 percent of that is carved out for producing and for sharing. The Internet-connected population watches roughly a trillion hours of TV a year. That’s about five times the size of the annual U.S. consumption. One per cent of that is 100 Wikipedia projects per year worth of participation.
Here’s something four-year-olds know: A screen that ships without a mouse ships broken. Here’s something four-year-olds know: Media that’s targeted at you but doesn’t include you may not be worth sitting still for. Those are things that make me believe that this is a one-way change. Because four-year-olds, the people who are soaking most deeply in the current environment, who won’t have to go through the trauma that I have to go through of trying to unlearn a childhood spent watching Gilligan’s Island, they just assume that media includes consuming, producing and sharing.
It’s also become my motto, when people ask me what we’re doing–and when I say “we” I mean the larger society trying to figure out how to deploy this cognitive surplus, but I also mean we, especially, the people in this room, the people who are working hammer and tongs at figuring out the next good idea. From now on, that’s what I’m going to tell them: We’re looking for the mouse. We’re going to look at every place that a reader or a listener or a viewer or a user has been locked out, has been served up a passive or a fixed or a canned experience, and ask ourselves, “If we carve out a little bit of the cognitive surplus and deploy it here, could we make a good thing happen?” And I’m betting the answer is yes.”
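Shirky's back-of-the-envelope arithmetic is easy to check. A minimal sketch in Python, using only the rough estimates quoted in the talk above (these constants are his stated guesses, not measured data):

```python
# Rough estimates from Shirky's talk, in hours of human attention
WIKIPEDIA_HOURS = 100e6          # ~100 million hours for all of Wikipedia
US_TV_HOURS_PER_YEAR = 200e9     # ~200 billion hours of U.S. TV watching per year
GLOBAL_TV_HOURS_PER_YEAR = 1e12  # ~1 trillion hours for the Internet-connected world

# U.S. TV time expressed in "Wikipedia projects" per year
print(US_TV_HOURS_PER_YEAR / WIKIPEDIA_HOURS)        # 2000.0

# Global TV time relative to U.S. consumption
print(GLOBAL_TV_HOURS_PER_YEAR / US_TV_HOURS_PER_YEAR)  # 5.0

# Carving out 1 percent of global TV time for participation
one_percent = 0.01 * GLOBAL_TV_HOURS_PER_YEAR
print(one_percent / WIKIPEDIA_HOURS)                 # 100.0
```

The numbers line up with the quote: 2,000 Wikipedias' worth of U.S. television per year, and a 1 percent shift in global viewing equal to 100 Wikipedia projects annually.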
And here are excerpts from Evgeny Morozov’s critique:
“The main argument of Cognitive Surplus rests on a striking analogy. Just as gin helped the British to smooth out the brutal consequences of the Industrial Revolution, the Internet is helping us to deal more constructively with the abundance of free time generated by modern economies.
Shirky argues that free time became a problem after the end of WWII, as Western economies grew more automated and more prosperous. Heavy consumption of television provided an initial solution. Gin, that “critical lubricant that eased our transition from one kind of society to another,” gave way to the sitcom.
More recently TV viewing has given way to the Internet. Shirky argues that much of today’s online culture—including videos of toilet-flushing cats and Wikipedia editors wasting 19,000 (!) words on an argument about whether the neologism “malamanteau” belongs on the site—is much better than television. Better because, while sitcoms give us couch potatoes, the Internet nudges us toward creative work.
That said, Cognitive Surplus is not a celebration of digital creativity along the lines of Richard Sennett’s The Craftsman or Lawrence Lessig’s “remix culture.” Shirky instead focuses on the sharing aspect of online creation: we are, he asserts, by nature social, so the Internet, unlike television, lets us be who we really are. “No one would create a lolcat to keep for themselves,” Shirky argues, referring to the bête noire of Internet-bashers, the humorous photos of cats spiced up with funny and provocative captions. “Cognitive surplus” is what results when we multiply our constantly expanding free time by the tremendous power of the Internet to enable us to do more with less, and to do it together with others.
Arguments about infinite digital opportunities for doing good have been a commonplace of cyber-utopians since the mid-1990s. But Shirky is a populist, not a utopian. His only benchmark of success is the relative standing of “us” against dominant institutions and, in particular, against the mind-numbing, brain-damaging, creativity-suppressing beast that is the traditional media.
For Shirky, doing anything online beats the passivity nurtured by the traditional media. The argument is beautiful in its simplicity: “the real gap is between doing nothing and doing something, and someone making lolcats has bridged that gap,” for “the stupidest possible creative act is still a creative act.”
To drive that point home, he proposes a thought experiment: while Americans spend 200 billion hours a year watching television, the whole of humanity spent something like 100 million hours to create Wikipedia (or, at least, its 2008 version). Thus, even a tiny change in our TV watching habits can lead to significant social gains. Not every Internet project would become a Wikipedia—lolcats are still the currency of the day—but Shirky urges us to keep trying. Short-selling the Internet may prevent us from stumbling upon a technology as revolutionary as the printing press.
Shirky’s strong suit is not hard data, but clever anecdotes. He draws on a vast array of provocative and memorable stories—from anime communities in Japan to skaters in Santa Monica, garbage-collectors in Pakistan, and car-poolers in Canada—that help to bolster his thesis. But the anecdotes don’t make up for the lack of rigor. In a book that claims to document broad social shifts across different media ecosystems, revolutionary changes are presumed to be self-evident, linear, and transparent.
In Cognitive Surplus, Shirky is comparably inventive. This time, the tech-savvy teenage protesters of South Korea make a prominent appearance. The South Korean example is worth discussing in detail because it highlights how easy it is to draw misleading conclusions from anecdotes.
For more than a month between May and June 2008, the streets of Seoul brimmed with tens of thousands of angry people, unhappy that newly elected president Lee Myung-bak had lifted a five-year ban on imports of American beef. Many South Koreans felt that the ban, originally imposed because of fears of mad cow disease, had been rescinded too hastily, giving public safety a back seat to the exigencies of foreign policy.
So they took to Seoul’s parks and public squares and mounted candlelight vigils and sang “No to mad cow!” By late June, their efforts paid off: the president was forced to apologize on national television, reshuffle his cabinet, and add a few extra restrictions to the trade agreement.
Shirky zeroes in on the high-school students—most of them girls—who spearheaded the protests. He is particularly impressed to report that they learned about the ban through postings on an Internet forum dedicated to their favorite boy band. “Massed together, frightened and angry that Lee’s government had agreed to what seemed a national humiliation and a threat to public health, the girls decided to do something about it,” Shirky writes, pointing out that the band’s Web site “provided a place and a reason for Korea’s youth to gather together by the hundreds of thousands.”
For Shirky, this suggests nothing less than a revolution in revolution-making: “When teenage girls can help organize events that unnerve national governments, without needing professional organization or organizers to get the ball rolling, we are in new territory.” He uses the story to illustrate the limitations of the South Korean media in fostering such revolutionary pursuits: a similar protest would have been unimaginable in the sitcom age.
The problem isn’t just that Shirky overlooks some facts. His central narrative—people vs. corrupt and irresponsible government—blinds him to the ambiguous implications of that mix of free time and Internet access that he celebrates as “cognitive surplus.” Yes, South Korea is prosperous and wired. But it still harbors numerous social ills that information technology may aggravate.
Shirky ignores South Korea’s epidemic of Internet addiction, from which 2 million residents (4 percent of the population) reportedly suffer. (Remember the South Korean couple that let their three-month-old starve to death while they reared their virtual child?) Nor does he mention the growth of xenophobic cyber-vigilante groups that troll social-networking sites in search of evidence that foreigners who come to teach English in the country behave immorally. And Shirky is similarly oblivious to the patriotic netizens who organize cyber-attacks on Japanese Web sites over matters as petty as figure skating. More substantial issues between the two countries—like the future of the disputed Liancourt Rocks islands—result in even greater online vitriol.
If your only metric of social progress concerns who has access to what tools and at what costs, such “negative externalities” do not matter. But if you are not already a committed populist, such risks may give you pause.
The broader societal implications of Shirky’s argument are clear: universal access to tools for producing and disseminating information is the ultimate public good, even if it crowds out other such goods. To that end Shirky closes the book with a powerful—if abstract—call to arms:
We look everywhere a reader or a viewer or a patient or a citizen has been locked out of creating and sharing . . . and we’re asking, If we carve out a little bit of the cognitive surplus and deploy it here, could we make a good thing happen? (Emphasis original.)

Maybe. But Shirky’s digital populism not only blinds him, McLuhan-style, to inconvenient facts, it blinds him to the immense complexities and competing values inherent in democratic societies. He says he is writing about Western democracies, but they are unrecognizable in his book, for they appear to have been sterilized completely of social conflict.
Shirky presents a world without nationalism, corruption, religion, extremism, terrorism. It is a world without any elections, and thus no need to worry about informed voters. Class, gender, and race make a few appearances, but not as venues of systemic oppression. They are just more testimony to the mainstream media’s elitism. Describing the media habits of his young students, Shirky remarks that they “have never known a world with only three television channels, a world where the only choice a viewer had in the early evening was which white man was going to read them the news in English.”
But while Shirky seems content to gloss over the deficiencies of democratic politics and declare them transformed, a more sober analyst will realize that the transformation of those politics is far from complete and in fact requires more determined popular engagement. Even in the age of the Internet, the fate of the nation depends on who organizes in the public sphere, who shows up at the voting booth, and how well-informed those people are.”