The Epistemology of Wikipedia

Episteme publishes articles on the social dimensions of knowledge from the perspective of philosophical epistemology and related social sciences.

Its February 2009 issue (Vol. 6, no. 1) is dedicated to the epistemology of mass collaboration, and in particular carries a number of articles on the trustworthiness of Wikipedia.

For example:

* Wikipedia and the Epistemology of Testimony, by Deborah Perron Tollefsen:

“In this paper, I explore the issue of group testimony in greater detail by focusing on one putative source of testimony, that of Wikipedia. My aim is to answer the following questions: Is Wikipedia a source of testimony? And if so, what is the nature of that source? Are we to understand Wikipedia entries as a collection of testimonial statements made by individuals, some subset of individuals, or is Wikipedia itself (the organization or the Wikipedia community) the entity that testifies?”

* The Epistemic Cultures of Science and Wikipedia: A Comparison, by K. Brad Wray:

“I compare the epistemic culture of Wikipedia with the epistemic culture of science, with special attention to the culture of collaborative research in science. The two cultures differ markedly with respect to (1) the knowledge produced, (2) who produces the knowledge, and (3) the processes by which knowledge is produced. Wikipedia has created a community of inquirers that are governed by norms very different from those that govern scientists. Those who contribute to Wikipedia do not ground their claims on their reputations as knowers, for they stand to lose nothing if and when their contributions are found to be misleading or false.”

* The Fate of Expertise after Wikipedia, by Lawrence M. Sanger:

“explores a couple ways in which egalitarian online communities might challenge the occupational roles or the epistemic leadership roles of experts. There is little support for the notion that the distinctive occupations that require expertise are being undermined. It is also implausible that Wikipedia and its like might take over the epistemic leadership roles of experts. Section 4 argues that a main reason that Wikipedia’s articles are as good as they are is that they are edited by knowledgeable people to whom deference is paid, although voluntarily. But some Wikipedia articles suffer because so many aggressive people drive off people more knowledgeable than they are”

and finally:

* On Trusting Wikipedia, by P. D. Magnus:

“Given the fact that many people use Wikipedia, we should ask: Can we trust it? The empirical evidence suggests that Wikipedia articles are sometimes quite good but that they vary a great deal. As such, it is wrong to ask for a monolithic verdict on Wikipedia. Interacting with Wikipedia involves assessing where it is likely to be reliable and where not. I identify five strategies that we use to assess claims from other sources and argue that, to a greater or lesser degree, Wikipedia frustrates all of them. Interacting responsibly with something like Wikipedia requires new epistemic methods and strategies.”

The problems are explained by editor Don Fallis:

“Despite these concerns, there is much theoretical and empirical evidence that large collaborative projects, such as Wikipedia, can actually be fairly reliable (cf. Surowiecki 2004, Sunstein 2006, Page 2007, Fallis 2008). When groups are sufficiently large and diverse, they can often come up with better information than the experts on a topic. For example, when a contestant on the television show Who Wants to be a Millionaire? is stumped by a question, she can poll the studio audience or phone a friend to get some help. It turns out that consulting the collective wisdom of the audience is a much more reliable “lifeline” than consulting your smartest friend (Surowiecki 2004, 4). And this phenomenon, often referred to as the Wisdom of Crowds, seems to apply to Web 2.0 projects. For example, in a study sponsored by the journal Nature (Giles 2005) that involved a blind comparison by experts, the error rate for Wikipedia articles (on several scientific topics) was higher, but only slightly higher, than the error rate for Britannica articles.

In any event, large collaborative projects that produce and disseminate information and knowledge are not going away any time soon. Thus, it is critical to understand the epistemology of mass collaboration. Toward this end, the contributions to this issue of Episteme address the following important epistemological questions: How reliable are large collaborative projects that produce and disseminate information? What is the explanation for their reliability?

Can large collaborative projects be reliable even if they do not make use of experts?

Does the information produced by such projects count as testimony? Can we be justified in believing information produced by large collaborative projects? How should we go about deciding whether to believe information produced by such projects?”

3 Comments on The Epistemology of Wikipedia

  1. Michel Bauwens

    Chris Watkins sent us the following two reactions to the article by K. Brad Wray:


    Found these interesting – addressing some specific concerns about Wikipedia. I basically agree with these two posts (and I’ve only read the abstract of Wray’s work which they critique, as it’s not open for non-subscribers). (The concerns raised by Michel in our recent discussion about the Wikipedia community and internal processes are a different topic – those issues do carry more weight.)


    Sage Ross blogs, in Wikipedia in theory:

    For the last few days I’ve been stewing about one of the articles in the recent Wikipedia edition of the epistemology journal Episteme. (See the Wikipedia Signpost for summaries of the articles.) I don’t find any of them particularly enlightening, but one just rubs me the wrong way: K. Brad Wray’s “The Epistemic Cultures of Science and Wikipedia: A Comparison”, which argues that where science has norms that allow reliable knowledge to be produced, Wikipedia has very different norms that mean Wikipedia can’t produce reliable knowledge.

    I guess it’s really just another proof of the zeroeth law of Wikipedia: “The problem with Wikipedia is that it only works in practice. In theory, it can never work.”

    Part of my problem might be that last year I blogged a comparison between Wikipedia’s epistemological methods and those of the scientific community, but came to the opposite conclusion, that in broad strokes they are actually very similar. But more than that, I think Wray’s analysis badly misrepresents both the way science works and the way Wikipedia works.

    A central piece of Wray’s argument is that scientists depend on their reputations as producers of reliable knowledge for their livelihoods and careers, and so their self-interest aligns with the broader institutional interests of science. This is in contrast to Wikipedia, where mistakes have little or no consequences for their authors and where a “puckish culture”, prone to jokes and vandalism, prevails. Wray writes that “In science there is no room for jokes” such as the Seigenthaler incident.

    The idea that scientists are above putting jokes and pranks into their published work is belied by historical and social studies of science and by many scientific memoirs as well. James D. Watson’s Genes, Girls, and Gamow is the first thing that comes to mind, but there are many examples I could use to make that point. And science worked much the same way, epistemologically, long before it was a paid profession and scientists’ livelihoods depended on their scientific reputations. (I don’t want to over-generalize here, but some of the main features of the social epistemology of science go back to the 17th century, at least. See Steve Shapin’s work, which is pretty much all focused, at least tangentially, on exploring the roots and development of the social epistemology of science.)

    Likewise, the idea that Wikipedia’s norms and community practices can’t be effective without more serious consequences for mistakes seems to me a wrong-headed way of looking at things. On Wikipedia, as in science, there are people who violate community norms, and certainly personal consequences for such violations are less on Wikipedia than for working scientists. But for the most part, propagating and enforcing community norms is a social process that works even in the absence of dire consequences. And of course, just as in science, those who consistently violate Wikipedia’s norms are excluded from the community, and their shoddy work expunged.

    For a more perceptive academic exploration of why Wikipedia does work, see Ryan McGrady’s “Gaming against the greater good” in the new edition of First Monday.

    Joseph Reagle blogs, in Wray and the Wrong Tree:

    I have to agree with Sage Ross on his response to Brad Wray’s The Epistemic Cultures of Science and Wikipedia: a Comparison. Wray is right to note that there are differences between scientific knowledge production and Wikipedia production in terms of the knowledge produced, who produces it, and the process. However, Wray’s article does not show any cognizance of the actual epistemic basis of Wikipedia: not a word about Neutral Point of View, No Original Research, and Verifiability. Instead, he uses Adam Smith’s invisible hand metaphor to argue that if local concern about one’s scientific reputation and career yields a global value in the production of knowledge, this cannot be claimed for Wikipedia because no one has a scientific reputation at stake. First, the invisible hand argument is not the only theory for understanding peer production. Second, as Ross notes, scientific reputation is not the only motive that might be operational under the invisible hand model — many Wikipedians are very much concerned about their peers’ opinions. Wray writes “We have very little reason to believe that an invisible hand is at work, ensuring that the truth, and only the truth, is made available” (p. 43). Smith’s hand can apply to more than scientific reputation and “truth”! That’s simply barking up the wrong tree.

  2. Pingback: P2P Foundation » Blog Archive » Defending Wikipedia’s methodology

  3. Michel Bauwens

    A fuller summary of the articles follows:

    Wikipedia and the Epistemology of Testimony

    The first article, University of Memphis philosopher Deborah Perron Tollefsen’s “Wikipedia and the Epistemology of Testimony”, explores the concept of group testimony as a possible basis for understanding Wikipedia’s authority. It builds on her earlier work, in which she argues that group testimony is fundamentally different from the testimony of an individual, since the testimony of the group itself may be different from the testimony of the individuals who make it up.

    Tollefsen’s conclusion is that some Wikipedia articles might be considered a form of group testimony, particularly when they are “mature” and represent the consensus of many editors and reflect the norms of the community, but others are better thought of as the individual testimony of their main author or authors. She goes on to characterize Wikipedia as “an immature epistemic agent”, the claims of which—like a child’s testimony—ought to be scrutinized carefully, rather than given the benefit of the doubt like the testimony of an adult. However, she finds that the traditional methods of scrutinizing face-to-face testimony do not translate well to Wikipedia and other virtual testimony. Instead, she argues, “receiving testimony from a source such [as] Wikipedia involves trusting not the man, but the system.” According to another view of testimony, it is not the testifier’s reliability but the testimony itself that should be scrutinized, by comparing it to other sources and to the “vast backdrop of beliefs the hearer has acquired” beforehand. This view of the epistemology of testimony is easy to extend to Wikipedia, Tollefsen argues.

    The Epistemic Cultures of Science and Wikipedia: A Comparison

    The second Wikipedia-focused article is “The Epistemic Cultures of Science and Wikipedia: A Comparison”, by State University of New York at Oswego philosopher K. Brad Wray. In it, Wray considers Wikipedia as a community focused on inquiry and knowledge production, analogous to the scientific community. However, he draws sharp contrasts between the norms of science and motivations of scientists, on the one hand, and the norms of Wikipedia and motivations of its editors on the other. Although both science and Wikipedia are collaborative knowledge projects, they have very different “epistemic cultures”.

    Wray posits that one possible justification for trusting Wikipedia is an “invisible hand” argument: although no identifiable individual or group of individuals ensures the quality of information on Wikipedia, “the knowledge-market will take care of itself, and poor articles reporting false claims will be rooted out.” According to Wray, while science does have a viable invisible hand, in the form of a reputation system that relies on peer-review, Wikipedians “lack the sorts of incentives that keep science in good working order”, and face few or no consequences for mistakes.

    Wray also explores what he calls Wikipedia’s “puckish culture”, prone to gossip and practical jokes. He recounts the Seigenthaler incident, and contrasts it to the sober culture of science. In science, he says, “the closest incident to such a joke is the Sokal affair”; however, the Sokal affair should not be considered a joke, but rather a demonstration of “the editors’ appalling ignorance of science”. This regrettable aspect of Wikipedia’s culture, he suggests, might be absent if—as in science—”one had to wait months before one’s contribution is posted”. Finally, against the argument of Deborah Perron Tollefsen’s article, Wray argues that even when considered as a form of testimony, Wikipedia is a flawed source of knowledge, precisely because of the failings in its “epistemic culture”.

    Despite his negative assessment, Wray does find one ray of hope: “What Wikipedia can do for us is to draw greater attention to epistemology and its relevance to our place in the social world. Though we live in a time in human history when knowledge may be easier to obtain than ever before, we are in desperate need of means to sort and evaluate what passes for knowledge.”

    The Fate of Expertise after Wikipedia

    “Over the long term, the quality of a given Wikipedia article will do a random walk around the highest level of quality permitted by the most persistent and aggressive people who follow an article.”

    —Lawrence M. Sanger

    A third article about Wikipedia comes from philosopher Lawrence M. Sanger—i.e., Wikipedia co-founder Larry Sanger, who is also the founder and editor-in-chief of wiki encyclopedia Citizendium. In “The Fate of Expertise after Wikipedia”, Sanger explores the paradoxes and shortcomings of Wikipedia’s relationship with experts and expertise, and suggests that Citizendium, a project that explicitly grants authority to expert contributors, is a better alternative.

    Sanger describes Wikipedia’s success as an egalitarian and open knowledge project, and then poses the question of how to reconcile the project’s successes—both real and potential—with its lack of “any special role for experts or any expert approval process”. One implication from such success might be that special roles for experts are not necessary in the rest of society either. However, Sanger shows this to be self-contradictory, in part because evaluating the success of Wikipedia requires expertise to compare it against, and in part because of Wikipedia’s indirect reliance on expertise.

    Sanger goes on to explore the actual roles of expert editors on Wikipedia, and whether Wikipedia itself could become an authoritative source without granting a special role for experts. He asserts that “Wikipedia is nothing like the egalitarian utopia its most radical defenders might have us believe”, and that in practice experts are often given deference. This, according to Sanger, is the key to what success Wikipedia has had in creating authoritative articles on some topics. However, problems arise when such deference breaks down, as is likely to occur for non-technical topics. As an a priori hypothesis, Sanger suggests that “Over the long term, the quality of a given Wikipedia article will do a random walk around the highest level of quality permitted by the most persistent and aggressive people who follow an article.” He argues that Citizendium’s model, in which subject-matter experts are given final authority over content in their areas of expertise, can surmount such problems caused by persistent and aggressive non-experts.

    Sanger’s article has attracted some press attention and blog discussion, especially for his idea about the limits of quality on Wikipedia. It was discussed on Slashdot, although Sanger suggested on the Citizendium Blog that many commentators did not “RTFA”. Sanger’s contribution was also discussed by The Chronicle of Higher Education.

    On Trusting Wikipedia

    The final Wikipedia-focused article is “On Trusting Wikipedia”, by State University of New York at Albany philosopher P. D. Magnus. Given the wide variability in article quality on Wikipedia, Magnus sets out to identify strategies for assessing reliability and to examine how well those strategies apply to Wikipedia.

    Magnus describes five common strategies for assessing the reliability of other online knowledge sources, all of which fail to some extent when applied to Wikipedia.

    * Authority may be a good basis for evaluating online sources when the individual or institutional authors have relevant connections or reputations; for Wikipedia, this breaks down because many authors are anonymous, and because an article vouched for by an outside authority at one point in time may since have changed.

    * Plausibility of style—whether or not an author seems to understand the style and terminology of the topic at hand—can be a useful indicator of reliability; on Wikipedia, this is confounded by collaborative copy-editing, which may improve only the style of bad content or introduce implausible elements of style in otherwise good content.

    * Plausibility of content—watching out for things that are obviously wrong—similarly fails on Wikipedia, because the most egregious errors, which might serve as a warning against additional subtle errors, are also the most likely to be corrected—leaving undetected errors behind.

    * Calibration—testing a subset of claims against an independent source of known reliability to gauge a source’s overall accuracy—is also difficult for Wikipedia content, since easily-checked claims are also the ones most likely to have been checked by other Wikipedia users, while harder-to-check claims may be less reliable.

    * Sampling—checking any given claim against multiple other sources—can also sometimes be misleading with Wikipedia content, since Wikipedia is widely reproduced and changes frequently; two seemingly independent sources may both be derived from the same Wikipedia content and contain the same errors.

    Magnus concludes that “teaching people to engage Wikipedia responsibly will require getting them to cultivate a healthy scepticism, to think of it differently than they think of traditional sources, and to learn to look beyond the current articles”.
