The emergence of algorithmic authority

Algorithmic authority is the decision to regard as authoritative an unmanaged process of extracting value from diverse, untrustworthy sources, without any human standing beside the result saying “Trust this because you trust me.” As Clay Shirky argues in the excerpts below, this model differs from personal or institutional authority: it handles the “Garbage In, Garbage Out” problem by accepting the garbage as input rather than trying to clean the data first; it delivers its output to the end user without any human supervisor checking it at the penultimate step; and it is eroding the previous institutional monopoly on authority in a number of public spheres, including the sphere of news.

Excerpts from a thoughtpiece by Clay Shirky:

“Algorithmic authority is the decision to regard as authoritative an unmanaged process of extracting value from diverse, untrustworthy sources, without any human standing beside the result saying “Trust this because you trust me.” This model of authority differs from personal or institutional authority, and has, I think, three critical characteristics.

First, it takes in material from multiple sources, which sources themselves are not universally vetted for their trustworthiness, and it combines those sources in a way that doesn’t rely on any human manager to sign off on the results before they are published. This is how Google’s PageRank algorithm works, it’s how Twitscoop’s zeitgeist measurement works, it’s how Wikipedia’s post hoc peer review works. At this point, it’s just an information tool.
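The unmanaged aggregation Shirky points to can be made concrete with a toy version of the PageRank idea — a minimal sketch under simplified assumptions (a four-page link graph, uniform damping, fixed iteration count), not Google’s actual implementation:

```python
# Toy PageRank: authority emerges from the link structure itself,
# with no human editor vetting any individual page.
# Illustrative sketch only, not Google's production algorithm.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}  # start with equal authority
    for _ in range(iterations):
        # every page keeps a small baseline share of authority
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its authority evenly
                share = damping * rank[page] / len(pages)
                for p in pages:
                    new_rank[p] += share
            else:  # a link is an untrusted page "voting" for another page
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# No source here is individually vetted; authority is a property
# of the whole graph, not a certificate granted by any one actor.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(graph)
# "c" accumulates the most authority because the most pages point to it.
```

The point of the sketch is Shirky’s: no single input is trusted, yet a usable ranking comes out the other end without anyone signing off on it.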

Second, it produces good results, and as a consequence people come to trust it. At this point, it’s become a valuable information tool, but not yet anything more.

The third characteristic is when people become aware not just of their own trust but of the trust of others: “I use Wikipedia all the time, and other members of my group do as well.” Once everyone in the group has this realization, checking Wikipedia is tantamount to answering the kinds of questions Wikipedia purports to answer, for that group. This is the transition to algorithmic authority.

As the philosopher John Searle describes social facts, they rely on the formulation X counts as Y in C — in this case, Wikipedia comes to count as an acceptable source of answers for a particular group.

There’s a spectrum of authority from “Good enough to settle a bar bet” to “Evidence to include in a dissertation defense”, and most uses of algorithmic authority right now cluster around the inebriated end of that spectrum, but the important thing is that it is a spectrum, that algorithmic authority is on it, and that current forces seem set to push it further up the spectrum to an increasing number and variety of groups that regard these kinds of sources as authoritative.

There are people horrified by this prospect, but the criticism that Wikipedia, say, is not an “authoritative source” is an attempt to end the debate by hiding the fact that authority is a social agreement, not a culturally independent fact. Authority is as authority does.

It’s also worth noting that algorithmic authority isn’t tied to digital data or even late-model information tools. The designs of Wikileaks, Citizendium, and Apache all use human vetting by actors prized for their expertise as a key part of the process. What seems important is that the decision to trust Google search, say, can’t be explained as a simple extension of previous models. (Whereas the old Yahoo directory model was, specifically, an institutional model, and one that failed at scale.)

As more people come to realize that not only do they look to unsupervised processes for answers to certain questions, but that their friends do as well, those groups will come to treat those resources as authoritative. Which means that, for those groups, they will be authoritative, since there’s no root authority to construct from. (I lied before. It’s not turtles all the way down; it’s a network of inter-referential turtles.)

Now there are boundary problems with this definition, of course; we trust spreadsheet tools to handle large data sets we can’t inspect by eye, and we trust scientific results in part because of the scientific method. Also, although Wikipedia doesn’t ask you to trust particular contributors, it is not algorithmic in the same way PageRank is. As a result, the name may eventually need to be replaced with something better.

But the core of the idea is this: algorithmic authority handles the “Garbage In, Garbage Out” problem by accepting the garbage as an input, rather than trying to clean the data first; it provides the output to the end user without any human supervisor checking it at the penultimate step; and these processes are eroding the previous institutional monopoly on the kind of authority we are used to in a number of public spheres, including the sphere of news.”

It is useful to place Clay’s analysis in the context of a typology provided by Henry Jenkins, which shows that algorithmic authority is just one of three methods to create authority through collective intelligence.

Henry Jenkins:

“We can argue that there are a range of different models of collective intelligence shaping the digital realm at the present time. We might distinguish broadly between three different models:

1) An aggregative model which assumes that we can collect data based on the autonomous and anonymous decisions of “the crowd” and use it to gain insights into their collective behavior. This is the model which shapes Digg and to some degree, YouTube.

2) A curatorial model where grassroots intermediaries seek to represent their various constituencies and bring together information that they think is valuable. This is the model which shapes the blogosphere.

3) A deliberative model where many different voices come together, define problems, vet information, and find solutions which would be impossible for any individual to achieve. This is the model shaping Wikipedia or, even more powerfully, alternate reality games. Of the three, the deliberative model offers the most democratic potential, especially when it is tempered by ethical and political commitments to diversity. This is the model which Pierre Levy describes in his book, Collective Intelligence. Levy’s account stresses the affirmative value placed on diversity in such a culture: the more diverse the community, the broader the range of possible information and insights that can inform the deliberative process.”
