Reliability vs. efficiency (2): system reliability should take precedence over efficiency

We continue our serialization of Roberto Verzola’s important contribution to a ‘green’ p2p economics theory.

After his critique of the efficiency criterion, he turns to system reliability as a better criterion for economic development.

We publish the first part of the article here, and refer readers to the original for the more detailed technical elaboration that follows our excerpt.

Roberto Verzola:

Reliability is closely related to risk, which is usually defined as the probability of failure multiplied by the estimated cost of the failure.
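In symbols (the notation is ours, not Verzola’s), with p the probability of failure over a given period and C the estimated cost of that failure:

    \mathrm{risk} = p \times C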

Reliability and failure

There are many ways of defining socio-economic failure. Even an extremely affluent society like the U.S. shows many signs of failure. Homelessness, unemployment, imprisonment, broken families, and poverty are examples of the failure of the U.S. system. For those who want a single measure of economic failure, below-subsistence income is one possible candidate.

Given a system’s output and input over a period of time, one would divide the average output by the average input to get the system’s average efficiency. System failure can be defined as an instance when output falls below a minimum threshold value. To determine reliability, one would then note all instances of failure and take the mean (average) time between failures (MTBF).[3]
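A minimal sketch of both measurements, assuming discrete time steps and purely hypothetical output/input series (all names and figures below are ours, for illustration):

    def average_efficiency(outputs, inputs):
        # Average output divided by average input over the same period.
        return (sum(outputs) / len(outputs)) / (sum(inputs) / len(inputs))

    def mtbf(outputs, threshold, dt=1.0):
        # A failure is any step whose output falls below the threshold;
        # MTBF is the mean gap between consecutive failures.
        failures = [t * dt for t, out in enumerate(outputs) if out < threshold]
        if len(failures) < 2:
            return float("inf")  # fewer than two failures: no gap to average
        gaps = [b - a for a, b in zip(failures, failures[1:])]
        return sum(gaps) / len(gaps)

    # Output dips below a subsistence threshold of 5 at steps 2 and 5.
    outputs = [10, 11, 2, 12, 10, 1, 11]
    inputs = [5] * len(outputs)
    print(average_efficiency(outputs, inputs))  # ~1.63
    print(mtbf(outputs, threshold=5))           # 3.0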

Note that efficiency highlights the gain in output, while reliability highlights the risk of failure. While the two are related, they are not the same. High efficiency can be achieved under unreliable conditions, and high reliability can be achieved under inefficient conditions.

For instance, a system that experiences frequent failures of extremely short duration has low reliability, yet those failures barely reduce its efficiency. As the duration of each failure approaches zero, the loss in efficiency becomes negligible. Such a system is highly efficient but very unreliable. Another system can have a much lower output than the first, but if it seldom fails, it is highly reliable though very inefficient.
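Continuing the sketch above, the contrast is easy to reproduce with two hypothetical series (the figures are illustrative only):

    # System A: high output, but a brief failure every third step.
    sys_a = [10, 10, 0, 10, 10, 0, 10, 10, 0]
    # System B: modest, steady output that never dips below the threshold.
    sys_b = [4, 4, 4, 4, 4, 4, 4, 4, 4]
    inputs = [5] * 9

    print(average_efficiency(sys_a, inputs))  # ~1.33: efficient...
    print(mtbf(sys_a, threshold=1))           # 3.0: ...but failing often
    print(average_efficiency(sys_b, inputs))  # 0.8: inefficient...
    print(mtbf(sys_b, threshold=1))           # inf: ...but it never fails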

In this paper, a strategy that improves on efficiency as well as the amount of input will be called a gain-improving strategy, while one that improves on reliability and the cost of failure will be called a risk-reducing strategy. Where the computational capabilities of economic agents allow it, these strategies may evolve into gain-maximizing and risk-minimizing strategies, respectively.

When efficiency and reliability conflict

In the engineering and design sciences, efficiency and reliability are two design considerations which often conflict, because reliability can usually be improved (e.g., through modularization or through redundancy) at the expense of efficiency. Reliability is often seen as equally important, and in many cases the more important of the two, so that efficiency often takes second priority until the desired level of reliability is reached. In many designs, higher output is important, but preventing failure is even more important.
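A standard illustration of this trade-off (the example and figures are ours, not Verzola’s): adding redundant copies of a component makes failure far less likely, but each copy consumes input without adding output.

    # n redundant copies of a component that works with probability r.
    # The subsystem fails only if every copy fails, so reliability rises
    # rapidly with n, while efficiency falls because input grows n-fold
    # for the same output.
    r = 0.9
    for n in range(1, 4):
        reliability = 1 - (1 - r) ** n
        relative_efficiency = 1 / n  # same output, n times the input
        print(n, round(reliability, 4), round(relative_efficiency, 2))
    # 1 0.9 1.0
    # 2 0.99 0.5
    # 3 0.999 0.33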

In software, for example, while efficient programs are desirable, designers warn that efficiency should never be sought at the expense of reliability.

In the design of bridges, buildings, dams, integrated circuits, spacecraft, communication systems and so on, reliability is right there at the top of the list of design criteria, above or beside efficiency.

Is this debate applicable to economics?

Economies today are as much a product of social engineering and conscious design as they are a result of unplanned evolutionary development. Thus, it makes sense to review the lessons of engineering and systems design and ask whether some of the theories and methods of these disciplines may give useful insights into economic policy and decision-making.

For instance, economies are systems that contain feedback and will therefore benefit from the insights of feedback theory. Economies are also complex systems that occasionally fail, and will therefore benefit not only from the insights of designers who have built extremely complex yet highly reliable hardware and software systems, but also from the lessons of systems that have failed miserably. It is as much from these failures as from the successes in minimizing the risk of failure that designers have extracted their heuristics for successful systems design.

It is now acknowledged, for instance, that many pre-industrial communities tend to minimize risk when optimizing their resources. It is interesting to observe how this clashes with the approach of modern corporations, which would optimize these same resources by maximizing gain. We can expect that the optimum level of resource use from the gain-maximizing firm’s viewpoint will tend to be higher than the optimum level from the risk-minimizing communities’ viewpoint. Thus, to firms and other gain-maximizers, the local resources would seem under-utilized, while the communities themselves would believe their resources are already optimally used.

This insight helps clarify the source of many corporate-versus-community resource conflicts that are so common in the countryside.
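The divergence described above can be made concrete with a toy model (entirely ours). Suppose expected yield rises with harvest intensity but levels off, while the probability of resource collapse rises with the square of intensity; a gain-maximizer and a risk-minimizer then settle on very different levels of resource use:

    # Harvest intensity u runs from 0 to 1. Expected yield is u * (1 - u/2);
    # the probability of collapse is u**2. (Functional forms are illustrative.)
    def expected_gain(u):
        return u * (1 - u / 2)

    def risk_adjusted(u, cost_of_failure=2.0):
        # Penalize the chance of collapse by its estimated cost.
        return expected_gain(u) - cost_of_failure * u ** 2

    grid = [i / 100 for i in range(101)]
    u_gain = max(grid, key=expected_gain)
    u_risk = max(grid, key=risk_adjusted)
    print(u_gain, u_risk)  # 1.0 vs 0.2: the gain-maximizer uses far more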

Improving reliability: the modular approach

The standard approach to designing a complex system for reliability is called modularization: break the system up into subsystems that are relatively independent of each other and that interact only through well-defined, carefully designed interfaces. Modularization is used in both hardware and software design.

The logic behind modularization is simple. In a system of many components, the number of possible pair interactions rises faster than the number of components itself, as the table in the full article shows.
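The table itself appears in the full article, but the underlying count is easy to verify: among n components there are n(n-1)/2 possible pairs, a number that grows quadratically while n grows only linearly.

    # Possible pairwise interactions among n components: n * (n - 1) / 2.
    for n in (2, 4, 8, 16, 32):
        print(n, n * (n - 1) // 2)
    # 2 -> 1, 4 -> 6, 8 -> 28, 16 -> 120, 32 -> 496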

More here.
