[I’m down to the final stretch of serializing The Homebrew Industrial Revolution: A Low Overhead Manifesto, at Michel Bauwens’ kind invitation. We begin the series of excerpts from the seventh and last chapter. I’ll do this in three installments, since it’s one of the more important chapters in the book]
We have seen the burdens of high overhead that the conventional, hierarchical enterprise and mass-production industry carry with them, their tendency to confuse the expenditure of inputs with productive output, and their culture of cost-plus markup. Running throughout this book, as a central theme, has been the superior efficiency of the alternative economy: its lower burdens of overhead, its more intensive use of inputs, and its avoidance of idle capacity.
Two economies are fighting to the death: one of them a highly capitalized, high-overhead, and bureaucratically ossified conventional economy, the subsidized and protected product of one and a half centuries’ collusion between big government and big business; the other a low-capital, low-overhead, agile and resilient alternative economy, outperforming the state capitalist economy despite being hobbled and driven underground.
The alternative economy is developing within the interstices of the old one, preparing to supplant it. The Wobbly phrase “building the structure of the new society within the shell of the old” sums up the concept as well as anything ever coined.
A. Networked Production and the Bypassing of Corporate Nodes
One of the beauties of networked production, for subcontractors ranging from the garage shop to the small factory, is that it transforms the old corporate headquarters into a node to be bypassed.
Johan Soderberg suggests that the current model of outsourcing and networked production makes capital vulnerable to being cut out of the production process by labor….
…Networked capital turns every point of production, from the firm down to the individual work assignment, into a node subject to circumvention. …[I]t is capital’s ambition to route around labour strongholds that has brought capitalism into network production…. Nations, factories, natural resources, and positions within the social and technical division of labour, are all made subject to redundancy. Thus has capital annulled the threat of blockages against necks in the capitalist production chain, upon which the negotiating power of unions is based.
But this redundancy created by capital as a way of routing around blockages, Soderberg continues, threatens to make capital itself redundant:
…Since all points of production have been transformed into potentially redundant nodes of a network, capital as a factor of production in the network has itself become a node subject to redundancy. [Hacking Capitalism]
(This was, in fact, what happened in the Third Italy: traditional mass-production firms attempted to evade the wave of strikes by outsourcing production to small shops, and were then blindsided when the shops began to federate among themselves.)…. [Sabel and Piore, Second Industrial Divide]
Dave Pollard, writing from the imaginary perspective of 2015, made a similar observation about the vulnerability of corporations that follow the Nike model of hollowing themselves out and outsourcing everything:
In the early 2000s, large corporations that were once hierarchical end-to-end business enterprises began shedding everything that was not deemed ‘core competency’, in some cases to the point where the only things left were business acumen, market knowledge, experience, decision-making ability, brand name, and aggregation skills. This ‘hollowing out’ allowed multinationals to achieve enormous leverage and margin. It also made them enormously vulnerable and potentially dispensable.
As outsourcing accelerated, some small companies discovered how to exploit this very vulnerability. When, for example, they identified North American manufacturers outsourcing domestic production to third world plants in the interest of ‘increasing productivity’, they went directly to the third world manufacturers, offered them a bit more, and then went directly to the North American retailers, and offered to charge them less. The expensive outsourcers quickly found themselves unnecessary middlemen…. The large corporations, having shed everything they thought was non ‘core competency’, learned to their chagrin that in the connected, information economy, the value of their core competency was much less than the inflated value of their stock, and they have lost much of their market share to new federations of small entrepreneurial businesses.
The worst nightmare of the corporate dinosaurs is that, in an economy where “imagination” or human capital is the main source of value, the imagination might take a walk: that is, the people who actually possess the imagination might figure out they no longer need the company’s permission, and realize its “intellectual property” is unenforceable in an age of encryption and bittorrent (the same is becoming true in manufacturing, as the discovery and enforcement of patent rights against reverse-engineering efforts by hundreds of small shops serving small local markets becomes simply more costly than it’s worth)…
B. The Advantages of Value Creation Outside the Cash Nexus
We already examined, in Chapters Three and Five, the tendencies toward a sharp reduction in the number of wage hours worked and increased production of value in the informal sector. From the standpoint of efficiency and bargaining power, this has many advantages.
On the individual level, a key advantage of the informal and household economy lies in its offer of an alternative to wage employment for meeting a major share of one’s subsistence needs, and the increased bargaining power of labor in what wage employment remains.
How much does the laborer increase his freedom if he happens to own a home, so that there is no landlord to evict him, and how much still greater is his freedom if he lives on a homestead where he can produce his own food?
That the possession of capital makes a man independent in his dealings with his fellows is a self-evident fact. It makes him independent merely because it furnishes him actually or potentially means which he can use to produce support for himself without first securing the permission of other men. [Ralph Borsodi, Prosperity and Security]
Ralph Borsodi demonstrated some eight decades ago—using statistics!—that the hourly “wage” from gardening and canning, and otherwise replacing external purchases with home production, is greater than the wages of most outside employment. [This Ugly Civilization]
…Compared to the fluctuation in value of financial investments, Borsodi writes,
the acquisition of things which you can use to produce the essentials of comfort—houses and lands, machines and equipment—are not subject to these vicissitudes…. For their economic utility is dependent upon yourself and is not subject to change by markets, by laws or by corporations which you do not control. [Ibid.]
The home producer is free from “the insecurity which haunts the myriads who can buy the necessaries of life only so long as they hold their jobs.” [Ibid.] A household with no mortgage payment, a large garden and a well-stocked pantry might survive indefinitely (if inconveniently) with only one part-time wage earner….
C. More Efficient Extraction of Value from Inputs
John Robb uses STEMI compression, an engineering analysis template, as a tool for evaluating the comparative efficiency of his proposed Resilient Communities:
In the evolution of technology, the next generation of a particular device/program often follows a well known pattern in the marketplace: its design makes it MUCH cheaper, faster, and more capable. This allows it to crowd out the former technology and eventually dominate the market (i.e. transistors replacing vacuum tubes in computation). A formalization of this developmental process is known as STEMI compression:
Space. Less volume/area used.
Time. Faster.
Energy. Less energy. Higher efficiency.
Mass. Less waste.
Information. Higher efficiency. Less management overhead.
So, the viability of a proposed new generation of a particular technology can often be evaluated based on whether it offers a substantial improvement in the compression of all aspects of STEMI without a major loss in system complexity or capability. This process of analysis also gives us an “arrow” of development that can be traced over the life of a given technology.
The relevance of the concept, he suggests, may go beyond new generations of technology. “Do Resilient Communities offer the promise of a generational improvement over the existing global system or not?”
In other words: is the Resilient Community concept (as envisioned here) a viable self-organizing system that can rapidly and virally crowd out existing structures due to its systemic improvements? Using STEMI compression as a measure, there is reason to believe it is:
Space. Localization (or hyperlocalization) radically reduces the space needed to support any given unit of human activity. Turns useless space (residential, etc.) into productive space.
Time. Wasted time in global transport is washed away. JIT (just in time production) and place.
Energy. Wasted energy for global transport is eliminated. Energy production is tied to locality of use. More efficient use of solar energy (the only true exogenous energy input to our global system).
Mass. Less systemic wastage. Made to order vs. made for market.
Information. Radical simplification. Replaces hideously complex global management overhead with simple local management systems.
The contrast between Robb’s Resilient Communities and the current global system dovetails, more or less, with that between our two economies. And his STEMI compression template, as a model for analyzing the alternative economy’s superiorities over corporate capitalism, overlaps with a wide range of conceptual models developed by other thinkers. Whether it be Buckminster Fuller’s ephemeralization, or lean production’s eliminating muda and “doing more and more with less and less,” the same general idea has a very wide currency.
A good example is what Mamading Ceesay calls the “economies of agility.” The emerging postindustrial age is a “network age where emerging Peer Production will be driven by the economies of agility.”
Economies of scale are about driving down costs of manufactured goods by producing them on a large scale. Economies of agility in contrast are about quickly being able to switch between producing different goods and services in response to demand.
If the Toyota Production System is a quantum improvement on Sloanist mass-production in terms of STEMI compression and the economies of agility, and networked production on the Emilia-Romagna model is a similar advancement on the TPS, then the informal and household economy is an order-of-magnitude improvement on both of them….
By these standards, the alternative economy that we saw emerging from the crises of state capitalism in previous chapters is capable of eating the corporate-state economy for lunch. Its great virtue is its superior efficiency in using limited resources intensively, as opposed to mass-production capitalist industry’s practice of adding subsidized inputs extensively. The alternative economy reduces waste and inefficiency through the greater efficiency with which it extracts use-value from a given amount of land or capital.
An important concept for understanding the alternative economy’s more efficient use of inputs is “productive recursion,” which Nathan Cravens uses to refer to the order-of-magnitude reduction in labor required to obtain a good when it is produced in the social economy, without the artificial levels of overhead and waste associated with the corporate-state nexus. Savings in productive recursion include (say) laboring to produce a design in a fraction of the time it would take to earn the money to pay for a proprietary design, or simply using an open source design; or reforging scrap metal at a tenth the cost of using virgin metal….
He cites, from Neil Gershenfeld’s Fab, a series of “cases that prove the theory of productive recursion in practice.” One example is the greatly reduced cost for cable service in rural Indian villages, “due to reverse engineered satellite receivers by means of distributed production.” Quoting from Fab:
A typical village cable system might have a hundred subscribers, who pay one hundred rupees (about two dollars) per month. Payment is prompt, because the “cable-wallahs” stop by each of their subscribers personally and rather persuasively make sure that they pay. Visiting one of these cable operators, I was intrigued by the technology that makes these systems possible and financially viable.
A handmade satellite antenna on his roof fed the village’s cable network. Instead of a roomful of electronics, the head end of his cable network was just a shelf at the foot of his bed. A sensitive receiver there detects and interprets the weak signal from the satellite, then the signal is amplified and fed into the cable for distribution around the village. The heart of all this is the satellite receiver, which sells for a few hundred dollars in the United States. He reported that the cost of his was one thousand rupees, about twenty dollars….
According to Marcin Jakubowski of Open Source Ecology, the effects of productive recursion are cumulative. “Cascading Factor 10 cost reduction occurs when the availability of one product decreases the cost of the next product.” We already saw, in Chapter Five, the specific case of the CEB Press, which can be produced for around 20% of the cost of purchasing a competing commercial model.
Amory Lovins and his coauthors, in Natural Capitalism, described the cascading cost savings (“Tunneling Through the Cost Barrier”) that result when the efficiencies of one stage of design reduce costs in later stages. Incremental increases in efficiency may increase costs, but large-scale efficiency improvements in entire designs may actually result in major cost reductions. Improving the efficiency of individual components in isolation can be expensive, but improving the efficiency of systems can reduce costs by orders of magnitude.
Much of the art of engineering for advanced resource efficiency involves harnessing helpful interactions between specific measures so that, like loaves and fishes, the savings keep on multiplying. The most basic way to do this is to “think backward,” from downstream to upstream in a system. A typical industrial pumping system, for example…, contains so many compounding losses that about a hundred units of fossil fuel at a typical power station will deliver enough electricity to the controls and motor to deliver enough torque to the pump to deliver only ten units of flow out of the pipe—a loss factor of about tenfold.
But turn those ten-to-one compounding losses around backward…, and they generate a one-to-ten compounding saving. That is, saving one unit of energy furthest downstream (such as by reducing flow or friction in pipes) avoids enough compounding losses from power plant to end use to save about ten units of fuel, cost, and pollution back at the power plant.
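The ten-to-one arithmetic in the pump example can be sketched numerically. The stage efficiencies below are invented for illustration, chosen only so that the chain delivers roughly one unit of flow per ten units of fuel; they are not figures from *Natural Capitalism*:

```python
# Compounding losses in a fuel -> electricity -> motor -> pump -> pipe chain.
# Each stage passes on only a fraction of the energy it receives, so the
# chain's overall efficiency is the PRODUCT of the stage efficiencies.
stages = {
    "power plant + grid": 0.32,
    "motor controls": 0.90,
    "motor": 0.85,
    "pump": 0.70,
    "pipe (throttling, friction)": 0.58,
}

chain_efficiency = 1.0
for eff in stages.values():
    chain_efficiency *= eff
# chain_efficiency comes out near 0.10: ~10 units of fuel per unit of flow.

# "Thinking backward": a saving made furthest downstream avoids every
# upstream loss, so one unit saved at the pipe saves ~10 units of fuel.
saved_downstream = 1.0
saved_at_plant = saved_downstream / chain_efficiency

print(f"chain efficiency: {chain_efficiency:.2f}")
print(f"fuel saved per unit of downstream saving: {saved_at_plant:.1f}")
```

The same multiplication explains why the leverage runs in both directions: losses compound going downstream, and savings compound going back upstream.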
To take another example, both power steering and V-8 engines resulted from Detroit’s massive increases in automobile weight in the 1930s, along with marketing-oriented decisions to add horsepower that would be idle except during rapid acceleration. The introduction of lightweight frames, conversely, makes possible the use of much lighter internal combustion engines or even electric motors, which in turn eliminate the need for power steering.
Most of the order-of-magnitude efficiencies of whole-system design that Lovins et al. describe result, not from new technology, but from more conscious use of existing technology: what Edwin Land called “the sudden cessation of stupidity” or “stopping having an old idea.” Simply combining existing technological elements in the most effective way can result in efficiency increases of Factor Four, Factor Eight, or more….
….The inefficiencies that result from an inability to “think backward” are far more likely to occur in a stovepiped organizational framework, where each step or part is designed in isolation by a designer whose relation to the overall process is mediated by a bureaucratic hierarchy. For example, in building design:
Conventional buildings are typically designed by having each design specialist “toss the drawings over the transom” to the next specialist. Eventually, all the contributing specialists’ recommendations are integrated, sometimes simply by using a stapler.
This approach inevitably results in higher costs, because efficiency gains in a single step taken in isolation are generally governed by a law of increasing costs and diminishing returns. Thicker insulation, better windows, etc., cost more than their conventional counterparts. Lighter materials and more efficient engines for a car, similarly, cost more than conventional components. So optimizing the efficiency of each step in isolation follows a rising cost curve, with each marginal improvement in the efficiency of the step costing more than the last. But by approaching design from the perspective of a whole system, it becomes possible to “tunnel through the cost barrier”:
When intelligent engineering and design are brought into play, big savings often cost less up front than small or zero savings. Thick enough insulation and good enough windows can eliminate the need for a furnace, which represents an investment of more capital than those efficiency measures cost. Better appliances help eliminate the cooling system, too, saving even more capital cost. Similarly, a lighter, more aerodynamic car and a more efficient drive system work together to launch a spiral of decreasing weight, complexity and cost. The only moderately more efficient house and car do cost more to build, but when designed as whole systems, the superefficient house and car often cost less than the original, unimproved versions.
…The trick is to “do the right things in the right order”:
…if you’re going to retrofit your lights and air conditioner, do the lights first so you can make the air conditioner smaller. If you did the opposite, you’d pay for more cooling capacity than you’d need after the lighting retrofit, and you’d also make the air conditioner less efficient because it would either run at part-load or cycle on and off too much.
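The “right order” rule is, at bottom, sizing arithmetic, which a toy calculation can make concrete. All the loads and factors below are assumptions for illustration, not Lovins’s numbers:

```python
# Why retrofit lights before sizing the air conditioner:
# old lighting dumps heat into the space, and a chiller sized
# before the lighting retrofit is sized against heat that will
# no longer exist afterward.

lighting_load_kw = 20.0          # assumed heat output of the old lighting
other_load_kw = 50.0             # assumed heat from occupants, sun, equipment
lighting_retrofit_factor = 0.4   # new lights emit 40% of the old heat

# Wrong order: buy the chiller first, sized against the old lighting load.
chiller_first = other_load_kw + lighting_load_kw                             # 70 kW

# Right order: retrofit the lights first, then size the chiller.
lights_first = other_load_kw + lighting_load_kw * lighting_retrofit_factor   # 58 kW

# Capacity purchased for a load that the lighting retrofit removes anyway,
# which then runs at part-load or short-cycles, cutting its efficiency too.
oversize_kw = chiller_first - lights_first

print(f"chiller sized first: {chiller_first:.0f} kW")
print(f"lights retrofitted first: {lights_first:.0f} kW")
print(f"stranded capacity: {oversize_kw:.0f} kW")
```

The stranded 12 kW is the cost of doing the steps in the wrong order: the same two measures, sequenced differently, buy different amounts of hardware.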
This is also a basic principle of lean production: most costs come from the last five percent of consumption needs, and from scaling the capacity of the load-bearing infrastructure to cover that extra five percent instead of just handling the first ninety-five percent. It ties in, as well, with another lean principle: getting production out of sync with demand (including the downstream demand for the output of one step in a process), either spatially or temporally, creates inefficiencies. Optimizing one stage without regard to production flow and downstream demand usually involves expensive infrastructure to get an in-process input from one stage to another, often with intermediate storage while it awaits a need. The total resulting infrastructure cost greatly exceeds the savings at individual steps. Poor synchronization of sequential steps in any process results in bloated overhead costs from additional storage and handling infrastructure….
Vinay Gupta described some of the specific efficiencies involved in productive recursion that combine to reduce the alternative economy’s costs by an order of magnitude. The most important efficiency comes from distributed infrastructure, which provides
the same class of services that are provided by centralized systems like the water and power grids, but without the massive centralized investments in physical plant. For example, dry toilets and solar panels can provide high quality services household by household without a grid….
Distributed infrastructure also benefits from “economies of agility,” as opposed to the enormous capital outlays in conventional blockbuster investments that must frequently be abandoned as “sunk costs” when the situation changes or funding stops. “…[H]alf a dam is no dam at all, but 500 of 1000 small projects is half way to the goal.”…
We also saw, in Chapter Five, the ways that modular design and the forms of stigmergic organization facilitated by open-source design contribute to lower costs. Modular design is a way of getting more bang for the R&D buck by maximizing use of a given innovation across an entire product ecology, and at the same time building increased redundancy into the system through interchangeable parts….
Malcolm Gladwell’s “David vs. Goliath” analysis of military history is an excellent illustration of the economies of agility. Victory goes to the bigger battalions about seven times out of ten—when Goliath outnumbers David ten to one, that is. But when the smaller army, outnumbered ten to one, acknowledges the fact and deliberately chooses unconventional tactics that target Goliath’s weaknesses, it actually wins about six times out of ten. “When underdogs choose not to play by Goliath’s rules, they win…” Guerrilla fighters from J.E.B. Stuart to T. E. Lawrence to Ho Chi Minh have learned, as General Maurice de Saxe put it, that victory is about legs rather than arms…. Another good example is what the U.S. military (analyzing Chinese asymmetric warfare capabilities) calls “Assassin’s Maces”: “anything which provides a cheap means of countering an expensive weapon.” A good example is the black box that transmits ten thousand signals on the same frequency used by SAM missiles, and thus overwhelms American air-to-surface missiles which target SAM radio signals….
In theory, it’s fairly obvious what the U.S. national security establishment needs to do. All the assorted “Fourth Generation Warfare” doctrines are pretty much agreed on that. It has to reconfigure itself as a network, more decentralized and agile than the network it’s fighting, so that it can respond quickly to intelligence and small autonomous units can “swarm” enemy targets from many directions at once. The problem is, it’s easier said than done. Al Qaeda had one huge advantage over the U.S. national security establishment: Osama bin Laden is simply unable to interfere with the operations of local Al Qaeda cells in the way that American military bureaucracies interfere with the operations of military units. No matter what 4GW doctrine calls for, no matter what the slogans and buzzwords at the academies and staff colleges say, it will be impossible to do any of it so long as the military bureaucracy exists, because military bureaucracies are constitutionally incapable of restraining themselves from interference. Robb describes the problem. He quotes Jonathan Vaccaro’s op-ed from the New York Times:
In my experience, decisions move through the process of risk mitigation like molasses. When the Taliban arrive in a village, I discovered, it takes 96 hours for an Army commander to obtain necessary approvals to act. In the first half of 2009, the Army Special Forces company I was with repeatedly tried to interdict Taliban. By our informal count, however, we (and the Afghan commandos we worked with) were stopped on 70 percent of our attempts because we could not achieve the requisite 11 approvals in time.
For some units, ground movement to dislodge the Taliban requires a colonel’s oversight. In eastern Afghanistan, traveling in anything other than a 20-ton mine-resistant ambush-protected vehicle requires a written justification, a risk assessment and approval from a colonel, a lieutenant colonel and sometimes a major. These vehicles are so large that they can drive to fewer than half the villages in Afghanistan. They sink into wet roads, crush dry ones and require wide berth on mountain roads intended for donkeys. The Taliban walk to these villages or drive pickup trucks.
The red tape isn’t just on the battlefield. Combat commanders are required to submit reports in PowerPoint with proper fonts, line widths and colors so that the filing system is not derailed. Small aid projects lag because of multimonth authorization procedures….
Robb adds his own comments on just how badly the agility-enhancing potential of network technology is sabotaged….:
* New communications technology isn’t being used for what it is designed to do (enable decentralized operation due to better informed people on the ground). Instead it is being used to enable more complicated and hierarchical approval processes — more sign offs/approvals, more required processes, and higher level oversight. For example: a general, and his staff, directly commanding a small strike team remotely.
Another example of the same phenomenon is the way the Transportation Security Administration deals with security threats: as the saying goes, by “always planning for the last war.”
First they attacked us with box cutters, so the TSA took away anything even vaguely sharp or pointy. Then they tried (and failed) to hurt us with stuff hidden in their shoes. So the TSA made us take off our shoes at the checkpoint. Then there was a rumor of a planned (but never executed) attack involving liquids, so the TSA decided to take away our liquids.
Distributed infrastructure benefits, as well, from what Robb calls “scale invariance”: the ability of the part, in cases of system disruption, to replicate the whole. Each part conserves the features that define the whole, on the same principle as a hologram….
Distributist writer John Medaille pointed out, by private email, that the Israelites under the Judges were a good example of superior extraction of value from inputs. At a time when the “more civilized” Philistines dominated most of the fertile valleys of Palestine, the Israelite confederacy stuck to the central highlands. But their “alternative technology,” focused on extracting more productivity from marginal land, enabled them to make more intensive use of what was unusable to the Philistines.
The tribes clung to the hilltops because the valleys were “owned” by the townies (Philistines) and the law of rents was in full operation. The Hebrews were free in the hills, and increasingly prosperous, both because of their freedom and because of new technologies, namely contoured plowing and waterproof cement, which allowed the construction of cisterns to put them through the dry season.
The alternative economy, likewise, has taken for its cornerstone the stone which the builders refused. As I put it in a blog post (in an admittedly grandiose yet nevertheless eminently satisfying passage):
…[T]he owning classes use less efficient forms of production precisely because the state gives them preferential access to large tracts of land and subsidizes the inefficiency costs of large-scale production. Those engaged in the alternative economy, on the other hand, will be making the most intensive and efficient use of the land and capital available to them. So the balance of forces between the alternative and capitalist economy will not be anywhere near as uneven as the distribution of property might indicate.
If everyone capable of benefiting from the alternative economy participates in it, and it makes full and efficient use of the resources already available to them, eventually we’ll have a society where most of what the average person consumes is produced in a network of self-employed or worker-owned production, and the owning classes are left with large tracts of land and understaffed factories that are almost useless to them because it’s so hard to hire labor except at an unprofitable price. At that point, the correlation of forces will have shifted until the capitalists and landlords are islands in a mutualist sea—and their land and factories will be the last thing to fall, just like the U.S. Embassy in Saigon.
Soderberg refers to the possibility that increasing numbers of workers will “defect from the labour market” and “establish means of non-waged subsistence,” through efficient use of the waste products of capitalism…. [Hacking Capitalism]
…[T]he same open-source insurgency model that governs the file-sharing movement is spreading to encompass the development of all kinds of measures for routing around planned obsolescence and the other irrationalities of corporate capitalism. The reason for the quick adaptability of fourth generation warfare organizations, as described by John Robb, is that any innovation developed by a particular cell becomes available to the entire network. And by the same token, in the file-sharing world, it’s not enough that DRM be sufficiently hard to circumvent to deter the average user. The average user need only use Google to benefit from the superior know-how of the geek who has already figured out how to circumvent it. Likewise, once anyone figures out how to circumvent any instance of planned obsolescence, their hardware hack becomes part of a universally accessible repository of knowledge.
As Cory Doctorow notes, cheap technologies which can be modularized and mixed-and-matched for any purpose are just lying around. “…[T]he market for facts has crashed. The Web has reduced the marginal cost of discovering a fact to $0.00.” He cites Robb’s notion that “[o]pen source insurgencies don’t run on detailed instructional manuals that describe tactics and techniques.” Rather, they just run on “plausible premises.” You just put out the plausible premise—i.e., the suggestion based on your gut intuition, based on current technical possibilities, that something can be done—that IEDs can kill enemy soldiers, and then anyone can find out how to do it via the networked marketplace of ideas, with virtually zero transaction costs.
But this doesn’t just work for insurgents — it works for anyone working to effect change or take control of her life. Tell someone that her car has a chip-based controller that can be hacked to improve gas mileage, and you give her the keywords to feed into Google to find out how to do this, where to find the equipment to do it — even the firms that specialize in doing it for you.
In the age of cheap facts, we now inhabit a world where knowing something is possible is practically the same as knowing how to do it.
This means that invention is now a lot more like collage than like discovery.
Doctorow mentions Bruce Sterling’s reaction to the innovations developed by the protagonists of his (Doctorow’s) Makers: “There’s hardly any engineering. Almost all of this is mash-up tinkering.” Or as Doctorow puts it, it “assembles rather than invents.”
It’s not that every invention has been invented, but we sure have a lot of basic parts just hanging around, waiting to be configured. Pick up a $200 FPGA chip-toaster and you can burn your own microchips. Drag and drop some code-objects around and you can generate some software to run on it. None of this will be as efficient or effective as a bespoke solution, but it’s all close enough for rock-n-roll.
Murray Bookchin anticipated something like this back in the 1970s, writing in Post-Scarcity Anarchism:
Suppose, fifty years ago, that someone had proposed a device which would cause an automobile to follow a white line down the middle of the road, automatically and even if the driver fell asleep…. He would have been laughed at, and his idea would have been called preposterous…. But suppose someone called for such a device today, and was willing to pay for it, leaving aside the question of whether it would actually be of any genuine use whatever. Any number of concerns would stand ready to contract and build it. No real invention would be required. There are thousands of young men in the country to whom the design of such a device would be a pleasure. They would simply take off the shelf some photocells, thermionic tubes, servo-mechanisms, relays, and, if urged, they would build what they call a breadboard model, and it would work. The point is that the presence of a host of versatile, reliable, cheap gadgets, and the presence of men who understand all their cheap ways, has rendered the building of automatic devices almost straightforward and routine. It is no longer a question of whether they can be built, it is a question of whether they are worth building.