First, some background: In the chapter of my book Cyberchiefs on governance in the English-language Wikipedia, there is a section, as in all my case studies, which deals with conflict. Within this section there is a sub-section which focuses on the management of disruptive users, particularly people who create fake identities, known in Wikipedia as “sock puppets”. In the course of this discussion I recounted a well-known incident on the English-language Wikipedia where an editor was banned from the encyclopedia because an administrator, who at the time specialised in uncovering “socks”, believed that this editor’s actions corresponded to the profile of a disruptive user trying to infiltrate the project. In fact, the editor had left Wikipedia for personal reasons, had then come back under a different pseudonym, and was contributing usefully. Once the error was revealed the ban was quickly corrected, but this administrative action generated a lot of discussion about justice, policing and authority in the English-language Wikipedia community.
In my book, I did not disguise the Wikipedia-pseudonym of the administrator at the centre of the controversy. I sincerely regret any distress that this description may have caused this person. Future editions of the book will correct this situation. I have conflicting views about the ethics of online peer project research. But before detailing these views, I need to set the record straight, because of some misleading statements about my book. The easiest way to dispel such statements would simply be for people to read the relevant chapter. However, this may not always be possible. I therefore reaffirm that the events outlined above represent a small part of a case study on expertise and governance in Wikipedia, not the main subject of this case study. Furthermore, I affirm that this case study only deals with the issue of online harassment in passing – there are two brief references to it in the entire chapter – and that it does not in any way identify or analyse the administrator in question as a victim of harassment.
And now: What do Wikipedia’s unique features, both in terms of its democratic promise and its socio-technical characteristics, mean for the ethics of Internet research?
Wikipedia research necessarily involves dealing with conflict. This stems from the fact that governance in peer projects depends on the diffusion of decision-making; combined with fuzzy guidelines such as “notability”, the result is innumerable decision-makers, each with their own take on the rules. As previously discussed here, wiki-conflict has structural, epistemic and psychological causes, among others. Wiki-conflict also relates to the disparities in competencies between users, some of whom (known as administrators or “sysops”) can protect or delete pages, and block other editors. Though no statistics exist, it is safe to assume that the majority of admins use their tools responsibly. At the same time, the project’s development relies in part on the constant entry of enthusiastic “newbies”. The herding of these novice, autonomous content providers along normative policy lines by more experienced users can generate resentment and a feeling of injustice among participants who feel they have been ill-treated, or even humiliated. The problem is compounded when those experienced users wield administrative tools, and if such situations involve friendship cliques, there is an increased likelihood of abuses of administrative authority.
Consequences of this dynamic include the rise in the proportion of policy and regulatory discussion relative to mainspace content (Kittur et al 2007); and the increasing likelihood that edits by unregistered or ordinary editors will be reverted, compared with edits by members of the administrative elite. This last fact may be having a chilling or discouraging effect on recruitment, as the tremendous increase in numbers of participants appears to be tapering off (Suh et al 2009). To complicate matters further, the perceived lack of accountability of some so-called “abusive admins” is a grievance frequently expressed on “watchdog sites” such as Wikipedia Review or “parody sites” such as Encyclopedia Dramatica. These sites’ relentless monitoring and criticism of Wikipedia have generated cycles of mutual demonisation: Wikipedia Review and Encyclopedia Dramatica denounce the malfeasances of the “Wikipedia elite”, whilst Wikipedians justify the banning of links to these frequently puerile or obscene “hypercritical” or “attack” sites because of their inappropriate revelation of personal information about Wikipedia editors and administrators.
How then should researchers deal with conflict on Wikipedia? The orthodox position in Internet research ethics is to stress the Internet’s blurring of the categories of the private and the public: technical accessibility does not equal publicness, and anonymity and prior informed consent are therefore necessary (King 1996, Waskul and Douglass 1996). Others have gone further, advocating the search for a consensus whereby researchers enable research subjects to correct or change what is written about them before publication (Allen 1996), or work together with research subjects to produce research (Bakardjieva and Feenberg 2001) by practising “open source ethics” (Berry 2004).
Susan Herring (1996) raised a number of objections to this stance, which are all relevant to Wikipedia research:
(a) False anonymity: since the Internet is a written medium, in publicly archived and indexed projects it is trivial to perform a search and find the authors of a particular quote, or the protagonists in a particular situation. Anonymising subjects would therefore simply be a convention, a means of protecting the researcher’s ethical credentials, whilst allowing the identification of subjects (this is particularly the case in a wiki where all changes to the archive are automatically recorded).
(b) Lack of verifiability: how can results be reproduced by other researchers if distinguishing features are scrubbed out?
(c) The question of scale: in large projects, who should the researcher seek prior consent from? In the case of Wikipedia, literally hundreds of people may opine during a conflict.
(d) Finally, Herring flagged the possible censoring of critical research: how can researchers conduct legitimate critical research (in Herring’s case, an investigation of gender bias in email lists) if prior consent is sought? Would informing subjects of the research project not entice them to modify the very behaviour which the researcher is documenting? In particular, what of participants who wield power over other users?
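Herring’s first objection, false anonymity, is easy to demonstrate on Wikipedia itself: because every page and talk page is publicly indexed, a verbatim quote can be traced back to its author in seconds. As a minimal sketch (Python standard library only; the endpoint is MediaWiki’s real public search API, but the quoted phrase is an invented placeholder), one can construct such a search query as follows:

```python
from urllib.parse import urlencode

# MediaWiki's public search API: any verbatim phrase from an article or
# talk page can be located without special access. The phrase used below
# is an invented placeholder, not a real quote.
API = "https://en.wikipedia.org/w/api.php"

def search_url(quote: str) -> str:
    """Build a fulltext exact-phrase search request."""
    params = {
        "action": "query",
        "list": "search",
        "srsearch": f'"{quote}"',  # quotation marks force exact-phrase matching
        "format": "json",
    }
    return API + "?" + urlencode(params)

print(search_url("a distinctive phrase from a dispute"))
```

Anonymising a quoted passage in a publication therefore offers little protection unless the wording itself is paraphrased.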
Wikipedia also has unique socio-technical features which differentiate it from the MUDs and discussion lists early Internet researchers dealt with. Non-disruptive Wikipedians do not participate in the project to share personal stories and experiences, find emotional support, experiment with identity, or play: they participate to write an encyclopedia following strict editorial and technical design rules. Wikipedia is a working environment; but it is also imbued with a strong pseudo-legal culture. When participants are deemed to be disruptive or when a conflict starts to heat up, specific guidelines and institutions come into play. The supreme conflict-resolution body on Wikipedia is the Arbitration Committee or “ArbCom”. The ArbCom invites witnesses to provide testimonies, gathers evidence, and adjudicates through (secret) votes. Evidence on Wikipedia takes the form of “diffs” (the “difference” between two versions of a page, showing a new edit), which must be produced whenever a claim is made about the actions of an editor.
Whilst new or inexperienced users may not be aware that all edits on Wikipedia may subsequently be referred to, the same cannot be said of experienced editors, and particularly of administrators. It is precisely the mastery of the socio-technical forms of evidence presentation that enables experienced editors to present convincing cases during disputes. Wikipedia thus has a culture of public “rational-critical” discussion (Hansen et al. 2009) in which experienced editors expect their words and actions to be evaluated and criticised.
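The permanence of this evidence trail can be made concrete with a short sketch (Python standard library only; “compare” is a real action in MediaWiki’s public API, though the revision IDs below are invented placeholders): any two revisions of a page can be compared on demand, which is what makes a “diff” a durable, citable unit of evidence.

```python
from urllib.parse import urlencode

# Every edit receives a permanent revision ID; MediaWiki's "compare"
# action returns the difference between any two revisions on demand.
# The revision IDs used here are invented placeholders.
API = "https://en.wikipedia.org/w/api.php"

def diff_url(from_rev: int, to_rev: int) -> str:
    """Build an API request for the diff between two revisions."""
    params = {
        "action": "compare",
        "fromrev": from_rev,
        "torev": to_rev,
        "format": "json",
    }
    return API + "?" + urlencode(params)

print(diff_url(123456789, 123456790))
```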
In short: on the one hand, everyone has a right to privacy, and the “golden rule” of doing no harm should be respected. On the other hand, Wikipedia operates as a transparent workshop and tribunal. Important events on Wikipedia are so well-known within the community, and so easily searchable from without, that it is unclear to what extent disguising real names or pseudonyms is a guarantor of anonymity. In addition, the dilemma posed by Herring (1996) has lost none of its salience: the ethics of not doing harm to subjects needs to be balanced against the ethics of potentially not addressing injustice unearthed by research.
The early days of Internet Research saw a host of stimulating examinations of the emergence of commons-based legal systems in MUDs and MOOs (see Maltz 1996, Mnookin 1996, Lemley 1997, Perritt 1997). To my knowledge, there has been little examination of Wikipedia’s internal legal structure. I think this is a telling sign. The discussion of the legality of administrative actions implies an examination of particular cases and decisions. Since Wikipedia-law is (a) unstable, as it can potentially be challenged and rewritten and (b) constantly used as a tool in a hornet’s nest of micro-conflicts, it is understandable that legal scholars would hesitate to comment. But more broadly, this points to the difficulty with practising what Macek (2006) calls “radical media scholarship”, understood as “politically-motivated research on the media which attempts to understand the world in order to change it and which is typically informed by Marxism, materialist feminism, radical political economy, critical sociology and social movement theory” (1031-1032).
If the point of such critical research is to have some impact on reality, it is difficult to see how this could be achieved without referring to specific examples of practices and procedures – which then runs the risk of identifying individuals, even when their identity is disguised, for the reasons outlined above. Should researchers, then, strictly obey the “golden rule” by only conducting quantitative analysis at the macro level (“there may be cases of abusive authorities because of structural factors x, y and z”), thereby staying out of Wikipedia’s embodied arrangements of power? Or is a hybrid form of critical research, in collaboration with the Wikipedia elite, possible?
Note: Thanks to David Berry for stimulating my thinking on this topic.
Allen C (1996) “What’s Wrong with the “Golden Rule”? Conundrums of Conducting Ethical Research in Cyberspace”, The Information Society 12 (2): 175-187.
Bakardjieva M and A Feenberg (2001) “Involving the Virtual Subject”, Ethics and Information Technology 2 (4): 233-240.
Berry D (2004) “Internet Research: Privacy, Ethics and Alienation: an Open-Source Approach”, Internet Research 14 (4): 323-332.
Hansen S, N Berente and K Lyytinen (2009) “Wikipedia, Critical Social Theory and the Possibility of Rational Discourse”, The Information Society 25 (1): 38-59.
Herring S (1996) “Linguistic and Critical Analysis of Computer-Mediated Communication: Some Ethical and Scholarly Considerations”, The Information Society, 12 (2): 153-168.
King S (1996) “Researching Internet Communities: Proposed Ethical Guidelines for the Reporting of Results”, The Information Society 12 (2): 119-128.
Kittur A, B Suh, B Pendleton and E Chi (2007) “He Says, She Says: Conflict and Coordination in Wikipedia”, in Proceedings of the Conference on Human Factors in Computing Systems, San José, CA, 28 April–3 May: 453-462.
Lemley M (1997) “The Law and Economics of Internet Norms”, Chicago-Kent Law Review 73: 1257-1294.
Macek S (2006) “Divergent Critical Approaches to New Media”, New Media & Society 8 (6): 1031-1038.
Maltz T (1996) “Customary Law and Power in Internet Communities”, Journal of Computer-Mediated Communication 2 (1).
Mnookin J (1996) “Virtual(ly) Law: The Emergence of Law in LambdaMOO”, Journal of Computer-Mediated Communication 2 (1).
Perritt H (1997) “Cyberspace Self-Government: Town-Hall Democracy or Rediscovered Royalism?”, Berkeley Technology Law Journal 12: 413-475.
Santana, A and D Wood (2009) “Transparency and Social Responsibility Issues for Wikipedia”, Ethics and Information Technology 11 (2): 133-144.
Suh B, G Convertino, E Chi, and P Pirolli (2009) “The Singularity is Not Near: Slowing Growth of Wikipedia?”, International Symposium on Wikis and Open Collaboration (WikiSym), 25-27 October, Orlando, FL.
Viégas F, M Wattenberg and K Dave (2004) “Studying Cooperation and Conflict Between Authors with History Flow Visualisations”, in Proceedings of the Conference on Human Factors in Computing Systems, Vienna, 24–29 April.
Waskul D and M Douglass (1996) “The Electronic Participant: Some Polemical Observations on the Ethics of On-line Research”, The Information Society 12 (2): 129-140.