The Desktop Regulatory State Chapt. 2: Systems Disruption

[This is the fourth installment in my serialization of the first three chapters of my book-in-progress, tentatively titled Desktop Regulatory State]

IV. Systems Disruption

The dynamics of competition between networks and hierarchies lead to what John Robb calls “systems disruption.” Networks, despite commanding much smaller resources than hierarchies can field, are able to leverage those resources through focused attacks on key nodes or weak points, achieving incapacitation many times greater than the apparent damage.

Because of their agility and the nature of network organization itself, they are able to route around damage much faster than hierarchies.
But perhaps the most important advantage of networks is the way hierarchies respond to attack. Hierarchies typically respond to network attacks by adopting policies that hasten their own destruction. Brafman and Beckstrom stated the general principle, as we saw earlier, that “when attacked, a decentralized organization tends to become even more open and decentralized.” On the other hand, “when attacked, centralized organizations tend to become even more centralized.” Hierarchies respond to attacks by becoming even more hierarchical: more centralized, more authoritarian, and more brittle. As a result they become even less capable of responding flexibly to future attacks, actively suppressing their own ability to respond effectively.

Al Qaeda has adopted an explicit strategy of “open-source warfare,” using relatively low-cost and low-risk attacks whose main damage comes not from the attacks themselves but from the U.S. government’s reaction to them. In its slick English-language e-zine Inspire, aimed at an American readership, it announced:

To bring down America we do not need to strike big. …[With the] security phobia that is sweeping America, it is more feasible to stage smaller attacks that involve less players and less time to launch.

Robb, in the blog post from which the quote above was excerpted, cited additional material from Inspire on the thinking behind the recent parcel bomb attack:

Al Qaeda’s choice of a demonstration was to use parcel bombs (called Operation Hemorrhage—a classic name for a systems disruption attack). These low cost parcel bombs were inserted into the international air mail system to generate a security response by western governments. It worked. The global security response to this new threat was massive….

Part of effective systems disruption is a focus on ROI (return on investment) calculations.

And Al Qaeda, in its commentary at Inspire, made it clear that ROI calculations were very much on its mind:

Two Nokia phones, $150 each, two HP printers, $300 each, plus shipping, transportation and other miscellaneous expenses add up to a total bill of $4,200. That is all what Operation Hemorrhage cost us… On the other hand this supposedly ‘foiled plot’, as some of our enemies would like to call [it], will without a doubt cost America and other Western countries billions of dollars in new security measures.

So Al Qaeda’s deliberate strategy is pretty much to goad the U.S. into doing something stupid—usually a safe gamble.
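The arithmetic behind that gamble is worth making explicit. Here is a minimal back-of-the-envelope sketch in Python; the $4,200 total is from the Inspire passage above, while the one-billion-dollar defensive figure is a purely illustrative stand-in, since the “billions” were never itemized:

```python
# A back-of-the-envelope sketch of the ROI asymmetry. The $4,200 total is
# from the Inspire passage quoted above; the $1 billion defensive figure is
# purely illustrative, since "billions" was never itemized.
phones = 2 * 150                  # two Nokia phones at $150 each
printers = 2 * 300                # two HP printers at $300 each
misc = 4_200 - phones - printers  # shipping, transport, etc., backed out of the total
attack_cost = phones + printers + misc   # = $4,200, per Inspire
defense_cost = 1_000_000_000             # assumed stand-in for "billions"

print(f"attacker spend: ${attack_cost:,}")
print(f"defender spend: ${defense_cost:,}")
print(f"leverage ratio: {defense_cost // attack_cost:,} : 1")
```

On these assumptions, every attacker dollar compels something like a quarter-million defender dollars. That asymmetry, not the bombs themselves, is the weapon.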

Al Qaeda spokesman Adam Gadahn stated explicitly, in a March 2010 video, that the U.S. government’s response to “failed” attacks, and the resulting economic damage, were the whole point:

Even failed attacks can help the jihadists by “bring[ing] major cities to a halt, cost[ing] the enemy billions, and send[ing] his corporations into bankruptcy.” Failed attacks, simply put, can themselves be successes. This is precisely why AQAP devoted an entire issue of Inspire to celebrating terror attempts that killed nobody.

All the other supposedly “failed” attacks on air travel have been resounding successes, by this standard. From Richard Reid’s “shoe bomb” to the alleged liquid explosives in shampoo bottles, to the so-called “underwear bomber” on Christmas 2009, every single failed attack results in an enormously costly, knee-jerk TSA policy—resulting in increased inefficiencies and slowdowns and ever more unpleasant conditions for travelers—to prevent that specific mode of attack from ever happening again. It doesn’t matter whether the attack works or not, or whether the person attempting it is a complete and total dickhead. So we have to take off our shoes, leave our shampoo and bottled water at home—and most recently, choose between being ogled and groped. Every such new measure amounts to a new tax on air travel, and results in yet another small but significant group of travelers on the margin deciding it’s the last straw. After the TSA required checked baggage to be screened, for example, air travel dropped by 6% between the fourth quarter of 2002 and the first quarter of 2003. Air travel on Thanksgiving 2010 was down about a tenth from the 2009 figure, which probably owes something to the public furor over the new body scanners and “enhanced patdowns.”

It’s only a matter of time till some Al Qaeda cell is smart enough to allow one of its agents to get “caught” with explosives in his rectum (or her vagina)—and, if the TSA reacts according to pattern, the whole civil aviation system dissolves into chaos.

The same self-destructive pattern appears in bureaucracies in the “peaceful” world of corporations, universities and government agencies, as described (in the case of an academic science department) by blogger “thoreau”:

If you make it costly to go through Official Channels, people will find ways to do things outside of Official Channels. Most of what they do will be harmless. However, some of it won’t be. By driving the activity underground you guarantee the following:

1) Harmful activities will not be spotted except through chance or when there’s An Incident. And we all know what bureaucracies do when there’s An Incident.

2) There will be no chance to work with people on making their activities safe, because they won’t come to you in advance. The only chance you’ll have to talk to them is when they get caught by chance (at which point they’ll be more focused on doing a better job of keeping secrets) or when there’s An Incident (at which point their main concern will be deflection of blame).

3) The institutional culture will develop an even greater disdain for Rules and even (in many cases) for Safety. Given the realities of how these things work out so frequently, disdain for Rules and even Safety (in most cases) is largely a healthy thing. However, to the extent that a bureaucrat actually values these things, that bureaucrat should try to make it so that doing things through Official Channels is cheaper than skipping Official Channels. That’s your only hope of getting people to actually respect these things. Well, there’s also fear, but fear isn’t respect. It’s mindless, panicked compliance, and it can fade over time, or motivate people to find even better evasive tactics.

Another thought on when there’s An Incident: Besides all of the usual problems with incentives and information in large institutions, it occurs to me that size guarantees that the people responsible for Safety, Compliance, and related matters will be separated from the people on the ground doing whatever it is that the organization is allegedly there to do. Consequently, the person who enforces a ridiculous rule, or who makes you sit through a useless presentation full of statements that are at best insulting and at worst factually wrong, will not be having lunch with you. Often the local enforcers (especially people whose primary task is something other than Safety) are more reasonable than the distant enforcers because, frankly, they need to be. Yes, their access to local information leads to smarter decisions, and they have at least some sort of incentive to see that the job gets done (whereas the distant enforcers only care about Compliance). But they also can’t afford to piss everyone else off (too much) because they will be having lunch with everyone else. If they insult everyone else with a boring and factually wrong Powerpoint, they’ll be ostracized.

Hierarchies degrade their own effectiveness in another way, as well: by becoming less capable of preventing future attacks. 9/11, as Robb pointed out, was a Black Swan event: a one-off occurrence that could not have been predicted with any degree of confidence, and which is unlikely to be repeated. Most subsequent new kinds of attack, like the “shoe bomber” and “underwear bomber,” were of a similar nature. The surveillance state, in increasing the scope of its data collection in order to anticipate such events, simply increases the size of the haystack relative to the needle and generates lots of false positives. Even when there is fairly high-quality, actionable intelligence specifically pointing to an imminent threat, like the warning from the underwear bomber’s father, the system is so flooded with noise that it doesn’t notice the signal. Given the very large pool of individuals who are generally sympathetic to Al Qaeda’s cause or who fit some generic “terrorist” personality profile, and the very small number of people actively and deliberately involved in planning terror attacks, it’s inevitable that genuinely dangerous suspects will be buried 999-to-1 in a flood of false positives. As Matt Yglesias argues,

Out of the six billion people on the planet only a numerically insignificant fraction are actually dangerous terrorists. Even if you want to restrict your view to one billion Muslims, the math is the same. Consequently, tips, leads and the like are overwhelmingly going to be pointing to innocent people. You end up with a system that’s overwhelmed and paralyzed. If there were hundreds of thousands of al-Qaeda operatives trying to board planes every year, we’d catch lots of them. But we’re essentially looking for needles in haystacks.

…the key point about identifying al-Qaeda operatives is that there are extremely few al-Qaeda operatives so (by Bayes’ theorem) any method you employ of identifying al-Qaeda operatives is going to mostly reveal false positives….

…If you have a 99.9 percent accurate method of telling whether or not a given British Muslim is a dangerous terrorist, then apply it to all 1.5 million British Muslims, you’re going to find 1,500 dangerous terrorists in the UK. But nobody thinks there are anything like 1,500 dangerous terrorists in the UK. I’d be very surprised if there were as many as 15. And if there are 15, that means your 99.9 percent accurate method is going to get you a suspect pool that’s overwhelmingly composed of innocent people. The weakness of al-Qaeda’s movement, and the very tiny pool of operatives it can draw from, makes it essentially impossible to come up with viable methods for identifying those operatives.
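Yglesias’s point can be checked directly with Bayes’ theorem. The sketch below uses the 1.5 million population and 99.9 percent accuracy from the quote; the figure of 15 actual operatives is his own guess, and treating one “accuracy” number as both the true-positive and true-negative rate is a simplification made here for illustration:

```python
# A minimal sketch of the base-rate argument above, using Bayes' theorem.
# Population and accuracy come from the quote; 15 actual operatives is
# Yglesias's own guess, and one "accuracy" figure is used as both the
# true-positive and true-negative rate for simplicity.
population = 1_500_000
actual_operatives = 15            # assumed, per Yglesias's guess
accuracy = 0.999                  # per the quote

flagged_true = actual_operatives * accuracy                        # operatives flagged
flagged_false = (population - actual_operatives) * (1 - accuracy)  # innocents flagged

# P(operative | flagged), by Bayes' theorem
posterior = flagged_true / (flagged_true + flagged_false)
print(f"people flagged:         {flagged_true + flagged_false:,.0f}")
print(f"innocents among them:   {flagged_false:,.0f}")
print(f"P(operative | flagged): {posterior:.4f}")   # about 0.0099
```

Even with a screen that errs only once in a thousand, roughly ninety-nine percent of those flagged are innocent. The scarcity of real operatives, not the quality of the test, is what dooms the method.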

One commenter under thoreau’s post, apparently equating bureaucratic safety rules with safety considerations as such, earnestly reminded him of the importance of safety, recounting a serious accident caused by “a physicist who thought he knew what he was doing.” In response, thoreau continued:

I don’t think that safety in the lab is a joke, Eli.

I think that most of the safety training sessions that I’ve sat through were worthless, that many of the procedures are more focused on covering bureaucratic ass than on helping people do things safely, and that anybody who relies on the safety officers to tell him how to be safe (as opposed to learning everything he can about the apparatus that he’s using, and learning from other people’s experiences with similar apparatuses) is the one auditioning for a Darwin Award.

I think that clowns who say “Look! Somebody almost died in some other context!” as soon as somebody criticizes a safety rule (I’ve dealt with such people) are the ones who lack the critical thinking ability to think through a situation and make good choices.

A student once left a harmless chemical in a refrigerator that had food. This refrigerator was NOT in a lab. Again, the refrigerator was NOT in a lab. Please re-read that sentence as many times as you deem necessary.

I will be the first to say that the student should be severely chastised and learn a very harsh lesson. Not because there was anything remotely dangerous about the situation, but because the student needs to learn good habits if he is going to avoid truly dangerous situations. (In fact, I was hoping that Samuel L. Jackson might get involved, and say something about the path of the righteous man, just to really make the lesson as dramatic as possible.)

Instead, the response was to take away the refrigerator. A refrigerator that was NOT in a laboratory room. A refrigerator that was in fact in an office. One joker even tried to ban food from the room before I pushed back. Again, the room was NOT a laboratory. It was a shared office area.

And when I said that this was stupid, do you know what the response was? Some idiot pointed out that a student had died in a fire in a chemistry lab at another school. As if that had anything to do with this.

What did the student learn? The student learned that if you get caught people will do stupid things. The teachable moment was tainted.

At this point it is customary for somebody to point out that a person once died or nearly died in some other situation. As if that had anything to do with this.

All of this together means that attempts to anticipate and prevent terror attacks through the bloated surveillance state, or to prevent attacks through standardized policies like shoe removal and “enhanced patdowns,” amount to nothing more than an elaborate—but practically worthless—feel-good ritual (no pun intended). It’s the placebo effect—or in Bruce Schneier’s memorable phrase, “security theater.”

When your system for anticipating attacks upstream is virtually worthless, achieving defense in depth at the “last mile” becomes monumentally important: having people downstream who are capable of recognizing and thwarting an attempt when it is actually made, and who have the freedom to use their own discretion in stopping it. Since 9/11, all the major failed terror attacks in the U.S. have been thwarted by the vigilance and initiative of passengers in direct contact with the situation. The underwear bomber was stopped by passengers who took the initiative to jump out of their seats and take the guy down. Yet the official response to every failed attack has been to further restrict the initiative and discretion of those very passengers.

Perhaps the best recent example of systems disruption is Wikileaks, whose founder, Julian Assange, Robb describes as “one of the most important innovators in warfare today.” A number of commentators have noted that the U.S. government’s response to Wikileaks is directly analogous to the TSA’s response to Al Qaeda attacks on civil aviation, and to the RIAA’s response to file-sharing. For example, Mike Masnick of Techdirt, in a juxtaposition of articles that probably wasn’t coincidental (even the titles are almost identical), wrote on the same day that “the TSA’s security policies are exactly what Al Qaeda wants,” and that both the TSA and Wikileaks stories showed

how a system based on centralization responds to a (very, very different) distributed threat. And, in both cases, the expected (and almost inevitable) response seems to play directly into the plans of those behind the threat….

…It’s what happens when a centralized system, based on locking up information and creating artificial barriers, runs smack into a decentralized, open system, built around sharing. For those who are trying to understand why this whole story reminds me of what’s happened in the entertainment industry over the past decade, note the similarities. It’s why I’ve been saying for years that the reason I’ve spent so much time discussing the music industry is because it was an early warning sign of the types of challenges that were going to face almost every centralized industry or organization out there. That included all sorts of other industries, but it also includes governments.

Assange’s stated goal is to destroy or degrade the effectiveness of hierarchies, not through the direct damage of attack, but through their own responses to attack. He starts by describing as “conspiratorial” those authoritarian institutions which encounter resistance to their goals, and therefore find it necessary to conceal their operations to some extent. (Even people who routinely dismiss “conspiracy theories” could hardly deny the phenomenon Assange describes.)

The more secretive or unjust an organization is, the more leaks induce fear and paranoia in its leadership and planning coterie. This must result in minimization of efficient internal communications mechanisms (an increase in cognitive “secrecy tax”) and consequent system-wide cognitive decline resulting in decreased ability to hold onto power as the environment demands adaptation.

Hence in a world where leaking is easy, secretive or unjust systems are nonlinearly hit relative to open, just systems. Since unjust systems, by their nature, induce opponents, and in many places barely have the upper hand, mass leaking leaves them exquisitely vulnerable to those who seek to replace them with more open forms of governance.

Blogger Aaron Bady describes the double bind into which this imperative puts an authoritarian institution:

The problem this creates for the government conspiracy then becomes the organizational problem it must solve: if the conspiracy must operate in secrecy, how is it to communicate, plan, make decisions, discipline itself, and transform itself to meet new challenges? The answer is: by controlling information flows. After all, if the organization has goals that can be articulated, articulating them openly exposes them to resistance. But at the same time, failing to articulate those goals to itself deprives the organization of its ability to process and advance them. Somewhere in the middle, for the authoritarian conspiracy, is the right balance of authority and conspiracy.

This means that “the more opaque it becomes to itself (as a defense against the outside gaze), the less able it will be to ‘think’ as a system, to communicate with itself.”

The leak… is only the catalyst for the desired counter-overreaction; Wikileaks wants to provoke the conspiracy into turning off its own brain in response to the threat. As it tries to plug its own holes and find the leakers, he reasons, its component elements will de-synchronize from and turn against each other, de-link from the central processing network, and come undone.
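A toy simulation can make Bady’s mechanism concrete: model the organization as a communication graph, sever internal links as a “security” response to leaks, and measure how much of itself the organization can still reach. Everything below, from the graph size to the fraction of links cut, is invented purely for illustration:

```python
# Toy model: an organization as a communication graph that responds to leaks
# by cutting internal links. All parameters are invented for illustration.
import itertools
import random

def avg_reachability(edges, nodes):
    """Fraction of ordered node pairs connected by some path (simple graph traversal)."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    reachable_pairs = 0
    for start in nodes:
        seen, frontier = {start}, [start]
        while frontier:
            current = frontier.pop()
            for neighbor in adj[current] - seen:
                seen.add(neighbor)
                frontier.append(neighbor)
        reachable_pairs += len(seen) - 1
    return reachable_pairs / (len(nodes) * (len(nodes) - 1))

random.seed(1)
nodes = list(range(30))                                   # a 30-member organization
edges = random.sample(list(itertools.combinations(nodes, 2)), 60)
for cut in (0.0, 0.3, 0.6):                               # share of internal links severed
    kept = edges[: int(len(edges) * (1 - cut))]
    print(f"{cut:.0%} of links cut -> internal reachability {avg_reachability(kept, nodes):.0%}")
```

Past some threshold the graph fragments, and internal reachability collapses far faster than the share of links removed.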

There’s a great scene in Stephen King’s The Stand where Randall Flagg, the Antichrist figure in charge of a post-apocalyptic regime ruled from Las Vegas, confronts his chief of secret police, who has let several of the good guys’ spies escape to report back to their compatriots in the Boulder Free Zone. The chief failed only because he had never been given Flagg’s list of “persons of interest”: he wasn’t on the “need to know” list. Flagg held his cards too close to his chest because he didn’t trust his subordinates.

So public embarrassment resulting from the cable leaks is not the end, but the means to the end. The end is not embarrassment, but the authoritarian state’s reaction to such embarrassment:

…Assange is not trying to produce a journalistic scandal which will then provoke red-faced government reforms or something, precisely because no one is all that scandalized by such things any more. Instead, he is trying to strangle the links that make the conspiracy possible, to expose the necessary porousness of the American state’s conspiratorial network in hopes that the security state will then try to shrink its computational network in response, thereby making itself dumber and slower and smaller.

The effect, a degrading of synaptic connections within the hierarchical organization, is analogous to the effect of Alzheimer’s Disease on the human brain.

Noam Scheiber at The New Republic argues that Wikileaks is “about dismantling large organizations—from corporations to government bureaucracies. It may well lead to their extinction.” In language much like Assange’s own, he argues that as an organization grows, the pool of potential leakers grows at the very same time as their personal bonds of loyalty to each other and to the organization weaken. Hence

Wikileaks is, in effect, a huge tax on internal coordination. And, as any economist will tell you, the way to get less of something is to tax it. As a practical matter, that means the days of bureaucracies in the tens of thousands of employees are probably numbered.

There are two options for dealing with this. The first, to suppress leaks and tighten up internal control, is probably impossible in the long run. Which leaves the second option:

…to shrink. I have no idea what size organization is optimal for preventing leaks, but, presumably, it should be small enough to avoid wide-scale alienation, which clearly excludes big bureaucracies. Ideally, you’d want to stay small enough to preserve a sense of community, so that people’s ties to one another and the leadership act as a powerful check against leaking. My gut says it’s next to impossible to accomplish this with more than a few hundred people. The Obama campaign more or less managed it with a staff of 500. But the record of presidential campaigns (one industry where the pressure to leak has been intense for years) suggests that’s about the upper limit of what’s possible.

I’d guess that most organizations a generation from now will be pretty small by contemporary standards, with highly convoluted cell-like structures. Large numbers of people within the organization may not even know one another’s name, much less what colleagues spend their days doing, or the information they see on a regular basis. There will be redundant layers of security and activity, so that the loss of any one node can’t disable the whole network. Which is to say, thanks to Wikileaks, the organizations of the future will look a lot like … Wikileaks.
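Scheiber’s “tax on internal coordination,” and his few-hundred-person threshold, can be illustrated with a toy probability model. Assume each member independently leaks with some small annual probability, and that this probability itself creeps upward as headcount dilutes loyalty; both parameters below are invented for illustration, not drawn from his article:

```python
# Toy model of leak risk vs. organization size. The base rate and the
# "loyalty dilution" term are invented; nothing here comes from Scheiber.
def p_any_leak(n, base_p=0.0005, dilution=1e-7):
    """Probability of at least one leak per year in an organization of n members."""
    p = base_p + dilution * n      # assumed: individual leak risk rises with org size
    return 1 - (1 - p) ** n        # complement of "nobody leaks this year"

for n in (50, 500, 5_000, 50_000):
    print(f"{n:>6} members -> P(at least one leak per year) = {p_any_leak(n):.1%}")
```

On these made-up numbers the risk of a leak is modest at a few hundred members and a near-certainty in the tens of thousands, which is roughly the shape of Scheiber’s intuition.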

Recall our discussion above of the “secrecy tax” which self-censorship and internal authoritarianism impose on hierarchies. Robb, in Brave New War, refers to a “terrorism tax” on a city resulting from

an accumulation of excess costs inflicted on a city’s stakeholders by acts of terrorism. These include direct costs inflicted on the city by terrorists (systems sabotage) and indirect costs because of the security, insurance, and policy changes needed to protect against attacks. A terrorism tax above a certain level will force the city to transition to a lower market equilibrium (read: shrink).

In particular, a “terrorism tax” of 6.3 to 7 percent will overcome the labor-pooling and transportation-savings advantages of concentrated population, compelling the city to move to a lower population equilibrium.
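Robb’s claim is essentially a threshold condition: the city shrinks once the terrorism tax exceeds the agglomeration benefit of concentration. The sketch below makes that condition explicit; the 6.3 to 7 percent band is from the text, while the benefit curve is an invented toy functional form:

```python
import math

# Stylized reading of Robb's threshold claim. The 6.3-7% band is from the
# text; the benefit curve below is an invented toy functional form.
def agglomeration_benefit(pop_millions):
    """Assumed labor-pooling and transport-savings benefit, as a share of output."""
    return 0.015 * math.log(1 + pop_millions) + 0.038   # toy log form, invented

def city_shrinks(pop_millions, terrorism_tax):
    """City moves to a lower population equilibrium once the tax eats the benefit."""
    return terrorism_tax > agglomeration_benefit(pop_millions)

for tax in (0.05, 0.063, 0.07):
    print(f"terrorism tax {tax:.1%}: city of 5M shrinks? {city_shrinks(5.0, tax)}")
```

For this hypothetical city the tipping point falls inside Robb’s 6.3 to 7 percent band; past it, the benefits of concentration no longer pay for themselves.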

Similarly, the excess costs imposed on hierarchies by the imperatives of conflict with hostile networks will act as a tax on them, compelling them to move to a lower size equilibrium. And increased levels of disobedience and disregard of government authority, and increased transaction costs of enforcing the law, will function as a disobedience tax. As a result, simply put, the advantages of hierarchy will be outweighed by the disadvantages at a lower size threshold. Large hierarchical institutions, both state and corporate, will become increasingly hollow, unable to enforce their paper claims to authority.

Hierarchies are entering a brutal period of natural selection, in which some will be supplanted from outside by networks, and the survivors will become more network-like under outside pressure: adapting, in Eric Raymond’s phrase, by decentralizing their functions and hardening their local components. Hierarchies will also face pressure to become less authoritarian internally, as they find themselves competing with networks for the loyalty of their workers. The power of exit will reinforce the power of voice.

This natural selection process is inevitable, even without intentionally malicious attacks by networks on hierarchies. Eric Raymond argues that the prevailing bureaucratic, hierarchical institutions of the 20th century were more or less workable, and capable of functioning on the basis of Weberian rules and “best practices,” so long as the complexity of the problems they faced remained supportable. Even in those days, of course, there were significant efficiency tradeoffs in return for control. In James C. Scott’s terminology, rendering the areas managed by hierarchies “legible” to those at the top entailed a level of abstraction and oversimplification that severely limited the usefulness of the leadership’s picture of the world. “The categories that they employ are too coarse, too static, and too stylized to do justice to the world that they purport to describe.”

And the process of rendering the functioning of the managed areas legible, through standard operating procedures and best practices, also entailed disabling or hindering much of the human capital on which an organization depends for optimal functioning. The proper functioning of any organization rests heavily on what Friedrich Hayek called “distributed knowledge” and what Michael Polanyi called “tacit knowledge”: direct, practical knowledge of the work process, which cannot be reduced to a verbal formula and transmitted apart from practical experience of the work. It is also practical knowledge of the social terrain within the organization, and of the network of personal relationships one must navigate in order to get anything done. Scott uses the Greek term metis, as opposed to techne. Bureaucratic micromanagement, interference, and downsizing, between them, decimate the human capital of the organization—much like the eradication of social memory in elephant herds when enough of the elderly matriarchs have been killed to disrupt the transmission of social mores.

For all these efficiency losses, from the hierarchy’s perspective they are necessary tradeoffs for the sake of acquiring and maintaining power. Reality must be abstracted into a simple picture, and specialized knowledge known only to those actually doing the work must be eradicated—not only to make the organization simple enough to be manageable by a finite number of standard rules, but because the information rents entailed in tacit/distributed knowledge render the lower levels less easily milked.

But today, the complexity of problems faced by society has become so insupportable that hierarchies are simply incapable of even passably coping with it. As Scott points out, the policies of bureaucratic hierarchies have always been made by people who “ignore the radical contingency of the future” and fail to account for the possibility of incomplete knowledge. But contingency and incompleteness have increased exponentially in recent years, to levels with which only a stigmergic organization can cope.

Eric Raymond argues that the level of complexity in American society in the mid-20th century was such that it could be managed—if not effectively, at least more or less adequately—by the meritocratic managerial classes using Weberian-Taylorist rules to govern large bureaucratic organizations. But if Gosplan and Bob McNamara could stumble along back then, the complexity of recent decades has outstripped the ability of hierarchical, managerial organizations to cope.

Meanwhile, hierarchies’ responses to network attacks are self-destructive in another way, beyond the “secrecy tax”: they undermine their own perceived legitimacy in the eyes of the public. For one thing, they forfeit moral legitimacy by behaving in ways that directly contradict their legitimizing rhetoric. As Martin van Creveld argued, when the strong fight the weak they become weak—in large part because the public can’t stomach the knowledge of what goes into their sausage. The public support on which the long-run viability of any system of power depends is eroded by loss of morale.

The reason is that when the strong are seen beating the weak (knocking down doors, roughing up people of interest, and shooting ragtag guerrillas), they are considered to be barbarians. This view, amplified by the media, will eventually eat away at the state’s ability to maintain moral cohesion and drastically damage its global image. [John Robb, Brave New War]

We saw this with the public reaction to Abu Ghraib and Guantanamo. And every video of an Israeli bulldozer flattening a Palestinian home with screaming mother and children outside undermines the “beleaguered Israeli David vs. Arab Goliath” mystique on which so much third party support depended. The “David vs. Goliath” paradigm is replaced by one of the Warsaw Ghetto vs. the Nazis, with the Israelis in the role of bad guys.

But more importantly, networked resistance undermines the main source of legitimacy for all authoritarian institutions, which is their “plausible premise”—their ability to deliver the goods in return for loyalty and compliance. Every attack to which a hierarchy proves unable to respond effectively undermines its grounds for expecting loyalty. It’s one thing to sell one’s soul to the Devil in return for a set of perks. But when the Devil is unable to deliver the goods, he’s in trouble.
