This post by Steven Hill was originally published on steven-hill.com

Recently I was having a pleasant conversation in Berlin with a highly regarded professor and former university president when the exchange turned unexpectedly disagreeable. We hit an impasse as we discussed the impact of new digital technologies on society and the economy, specifically the impact on jobs and the labor force. Apparently, and unintentionally, I had upped the ante.

“If the technologies might be so harmful as to eliminate millions of jobs, maybe we should…not allow that to happen?” I tentatively suggested. “Maybe we should…ban the worst offenders?”

The professor looked horrified. Many forward-thinking intellectuals have absorbed a brand of 21st-century optimism that regards technology as the great modernizer and giver of prosperity. It’s become an article of our Enlightenment-based faith, a core value that I have generally shared. My professor blinked like an owl.

“No, no, of course we can’t do that,” he sputtered. “In the long run, technology has always created more jobs than it destroys. Always. History has shown that technology has been an indisputable good for [sic] mankind. We cannot refuse technology.”

But is this true? Has technology always been an indisputable good? Must the march of technology always be relentless? What if we discover that a particular technology poses a threat to human society? Certainly we have concluded that atomic and chemical weapons are such dangers, and so their existence is closely monitored. The same with genetic cloning. Or how about steroids in sports, a biotechnology that has infected everything from professional baseball and football in the US to the Tour de France? We closely regulate and sometimes even ban certain technologies that pose risks.

But what if it’s a technology that will deep-six jobs or degrade the quality of labor, or even damage the human worker in tragic, existential ways? Previously we decided that slave labor, and even child labor, violated core human values.

Our transatlantic values are about to be challenged mightily once again. The advance of new digital technologies, including robots, automation and artificial intelligence, combined with Internet-based labor platforms, the gig economy and other incarnations, presents several possible futures — some enlightened and progressive, others dark and foreboding. Who will benefit from this encroaching pre-eminence? That is one of the outstanding questions of our age.

Certainly the promise of medical breakthroughs, greater energy efficiency, self-driving vehicles and astounding communication and entertainment capabilities points to one possible future that is as awe-inspiring as it is head-spinning.

But the central dilemma of this impending brave new world can be summarized in a simple thought experiment: what if “smart” machines and robots could perform every single job there is to do, so that no human had to work at formal employment ever again? Who would reap the benefit of this unimaginably enormous productivity increase? Would the gains be broadly distributed to the general public? Or would they flow to a handful of “masters of the universe” — the owners and managers of these technologies?

Nobody’s crystal ball can tell us the answer, but what we do know is this: over the last two decades, the economies of Germany, the US and virtually all developed nations have been restructured so that the wealth from new technology and productivity gains has flowed into the pockets of an ever smaller minority of wealthy people. We also know that the wages of average workers have stagnated, despite sizable increases in business profits (with much of those profits hidden in overseas tax havens). It hasn’t happened at the same pace everywhere, but the differences have been ones of degree. So recent history clearly shows that the general public is in no way guaranteed to benefit from technological innovations. Unless we can figure out how to harness this digital horizon for the good of all, the continuation of the post-World War II “broadly shared prosperity” is in jeopardy.

I love my own personal technology – smartphone, laptop, tablet, YouTube and more – but blind faith in technology is not a policy…it is a disavowal of responsibility toward the future. We have to begin questioning certain assumptions before they become too hard-wired into our economic outlook. A lot of the coming political battle—and make no mistake, it will be a battle—will be over our societal notion of economic efficiency, which has become one of the cultural foundations of our modern world. How are we to define efficiency in a modern economy?

I remember being interviewed for Netwerk, an Amsterdam television show; a three-person team of producers arrived to conduct the interview inside a charmingly narrow bell-gable house overlooking one of the city’s quaint canals. A member of the team held in front of me a rather large boom mic with one of those hairy-looking sweater hoods, so I discreetly asked him, “Why don’t you give me a lapel mic? That way you won’t have to hold that big thing.” He looked at me with a bit of a Rembrandt twinkle and said, “If I do that, I’m out of a job.”

The brilliance of his small insight immediately struck me. Despite having been deeply rubbed from an early age with the salts of American capitalist norms, I was forced to ponder the consequences of my ingrained “invisible hand” philosophy. What good is market “efficiency” if it results in more unemployment? Is it possible to be so efficient that you actually become inefficient?

A famous story illustrates a similar point in a larger macroeconomic way. In 1955, Walter Reuther, head of the United Auto Workers in the US, told of a visit to a newly automated plant owned by Ford Motor Company. The host, Henry Ford II, grandson of the founder, pointed to all the robots and mockingly asked the head of one of the nation’s largest labor unions: “Walter, how are you going to collect union dues from all those robots?” Without skipping a beat, Reuther replied: “Henry, how are you going to get them to buy your cars?”

Eventually the economic logic of paying Ford’s workers more so that they could afford to buy a car prevailed. Ford created more customers for his own company, a virtuous circle, a mutually benevolent—and efficient—feedback loop, now referred to as “Fordism.”

The logic of Fordism prevailed in the transatlantic economies for decades, but lately it has eluded the near-sighted designers of the new digital economy. CEOs today want a labor force they can turn off and on like a light switch. Indeed, the tech gurus are almost boastful about how their inventions are injecting software and algorithms into ever smarter machines that are replacing human workers. The dirty little secret of Silicon Valley is that its leading companies, which are at the cutting edge of inventing the latest technologies, are not creating huge numbers of jobs.

Facebook (12,000 direct, full-time employees), Google (60,000 direct employees) and even Apple (66,000 direct employees) are laggards as job creators compared to traditional-economy companies like BMW, Daimler, GM, Ford, Volkswagen, Siemens and GE, which each employ hundreds of thousands of people. The newest startup kids on the block, Uber and Airbnb, each directly employ only several thousand regular employees. But that core oversees an army of freelancers and contractors, most working part-time as low-paid drivers and innkeepers. Dozens of new “sharing economy” companies employ millions of freelancers today in a range of occupations and industries.

The model corporation today is no longer the vertical, industrial giant of decades past, such as the auto companies; it is the leaner company that uses online technology to reduce its regular workforce. The best example is Upwork, an online labor platform where a mere 250 regular employees use technology to oversee an army of 10 million freelancers from all over the world. A vast range of professional and skilled occupations can be found for hire on Upwork, but here’s the catch: developed-world workers from the US, Germany and elsewhere bid for jobs alongside workers from India, the Philippines, China and Romania. It is an online labor auction, in which cheap Third World labor underbids developed-world wages in a race to the bottom. As self-employed freelancers, these workers must constantly hustle to find the next paying gig, and have no job security, health care or other social security benefits. Upwork marks a dramatic escalation, the next phase of Silicon Valley-Wall Street capitalism: it takes the logic of globalization and the Internet and intensifies it to the point where workers are subjected to new levels of vulnerability on a “virtual shop floor.” Reflecting on this trend, economist Nouriel Roubini says, “The factory of the future may be 1,000 robots and one worker manning them.”

I have asked a number of German labor market experts, as well as representatives from federal ministries, “How many German workers are earning income on Upwork?” None of them knew. Yet it took me only 20 minutes to go to the Upwork website and determine that there are over 18,000 Germans listed on that platform. Known as clickworkers, they are flying under the radar of German authorities, and most likely none of their earnings are being reported for income tax or social security purposes. And that’s just one labor platform; there are a ton of them today, including Amazon’s Mechanical Turk, Airbnb, CrowdFlower, Belgium’s ListMinut, Germany’s Clickworker, AppJobber, WorkHub and more. Various estimates have found anywhere from a million to 2.3 million German clickworkers working on these labor platforms. Using conservative numbers, that suggests €4 billion in lost income that is conceivably going untaxed, and nearly €600 million not being paid into the health care fund. This is serious money. The job market has moved into the 21st century, but Germany’s means of tracking workers is still stuck in the previous century.
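
To make the arithmetic behind those figures explicit, here is a minimal back-of-envelope sketch. The per-worker income and the contribution rate are my own illustrative assumptions (roughly €4,000 of platform income per worker per year, and the approximate German statutory health insurance rate), not reported figures; the point is only that even conservative inputs already produce serious money.

workers = 1_000_000                 # conservative end of the 1.0 to 2.3 million estimates
avg_annual_income_eur = 4_000       # assumed average platform income per worker (illustrative)
health_contribution_rate = 0.146    # approximate German statutory health insurance rate

untaxed_income = workers * avg_annual_income_eur
missed_health = untaxed_income * health_contribution_rate

print(f"Potentially untaxed income: EUR {untaxed_income / 1e9:.1f} billion")       # ~EUR 4.0 billion
print(f"Unpaid health fund contributions: EUR {missed_health / 1e6:.0f} million")  # ~EUR 584 million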

We may soon see driverless long-distance trucks plying the autobahns. Last April, a convoy of self-driving trucks drove across Europe to the port of Rotterdam. Yet here again we have to stop for a moment and think a bit more deeply: approximately 2 million Americans are truck drivers — it’s the single biggest occupation for US males — and over 600,000 Germans drive trucks for a living. All of their jobs are suddenly on the chopping block. Is it really a good idea to unleash a technology that will wipe out millions of these jobs? Sure, the trucking companies will benefit by reducing their labor costs — but will society benefit? The skills of a truck driver are not exactly transferable to other occupations.

This type of precariat-fueled company is gaining a foothold not only in the startup economy, but increasingly in industries of the traditional economy, such as the auto industry, drug and pharmaceutical companies, universities, transportation, newspapers and broadcasting, arts and entertainment and more. The rapid implementation of software and automation has resulted in an increase in the use of contractors, freelancers, temps and part-timers in virtually all industries and occupations. Despite a recent increase in Germany’s employment rate, the percentage of permanent, full-time jobs has declined by 10% since 2000, a greater rate of decline than in the UK and France. Meanwhile, the number of part-time jobs has increased by nearly a quarter, to 27% of all jobs. The number of temp workers and self-employed Germans also has increased, and the number of people working a second job has doubled in the last 10 years. Much of this is due to the technological transformation of the economy. No wonder tech pioneer and venture capitalist Marc Andreessen says, “Software is eating the world.”

So according to even the inventors of these technologies, the creation of jobs is no longer assured. Indeed, so concerned are some of the tech inventors that their inventions will not in fact be net job creators that many of them have begun calling for policy interventions like the creation of a “universal basic income.” A UBI would provide a floor of minimum income that a jobless economy no longer provides. It would be hugely expensive and therefore a political long shot, but it is a convenient bone tossed to a worried public to deflect criticism of the techies’ job-destroying business model. Once again, as in the aftermath of the global economic collapse in 2008, the private sector is advocating that the public sector pay for private sector shortcomings, in effect privatizing the gains and socializing the losses. A UBI would help the economy somewhat by at least maintaining consumer spending at a minimal level, but even if politically viable I fear it would end up resembling the old Soviet system: “We pretend to work, and they pretend to pay us.”
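
How expensive? A crude sketch, using assumptions of my own rather than any official proposal (Germany's roughly 82 million residents, and a hypothetical benefit of €1,000 per month), gives a sense of the scale:

population = 82_000_000         # approximate population of Germany (assumption for illustration)
monthly_benefit_eur = 1_000     # hypothetical benefit level, chosen only to show the scale

annual_cost = population * monthly_benefit_eur * 12
print(f"Annual cost of a universal basic income: EUR {annual_cost / 1e9:.0f} billion")  # ~EUR 984 billion

On these assumptions, the bill comes to roughly a trillion euros a year, about three times the current German federal budget, which is why “political long shot” may be an understatement.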

That leaves us with the realization that this may be the first era in recent human history in which technology will lead to fewer jobs, and in which the “creative destruction” of technology and innovation will be more destructive than it is creative. Yet we are walking blindly into it, without even questioning whether we should strictly regulate these technologies, much less put a stop to them.

Many forks in the road

The future is arriving faster than we realize. The discussion over the future of jobs is just the beginning of many key decisions that will have to be made over the next two decades, decisions that will result in a major civilizational shift. What might this transformation look like, in the not-too-distant future?

I caught a glimpse of it when I attended a tech conference in San Francisco. I sat spellbound while I listened to Padmasree Warrior, the chief technology officer for Cisco, one of the top technology infrastructure companies in the world, look into her crystal ball and reveal the future to her entranced audience.

“The future will be about sensors and the Internet of Things, and how they will start influencing what we do,” she said. By the Internet of Things she means a far-flung digital network in which our homes, our businesses, our communities, our lives, will all be deeply interlinked via billions of sensors scattered all over the world, in constant communication with one another, a throbbing hive of digital “neurons” that form the backbone of a global central nervous system known as The Cloud.

“Technology will become an extension of who we are as human beings,” she says. “We’ll wear a lot more of the technology. We’ll probably inject a lot of sensors that will keep track of what’s happening in our bodies, so it can be much more predictive.”

Inject? Did she really say inject? I thought, sitting there in a sea of techno sapiens who were dutifully typing her every word into their laptops. I had a surreal, out-of-body experience, wondering who this “we” is that is going to inject nano-bots and somatic sensors into “our” bodies. I’m not even sure I can trust my computer and iPhone anymore—every time we turn these devices on, apparently, we are being tracked and spied on by advertisers, employers, corporations, the government and the Russians, in a way the Stasi could never have imagined. But at least we can turn off our computers and devices and leave them at home. Now they want to inject a nano-bot computer into our bodies? Shouldn’t there be, like, a public referendum about this?

Noted futurist Jeremy Rifkin has upped the ante even further. In his most recent book, The Zero Marginal Cost Society, he describes in great detail this Internet of Things that will propel “the meteoric rise of a global collaborative commons and the eclipse of capitalism.” All of these sensors, both within our bodies and in the external world, will feed a constant stream of Big Data into The Cloud. Rifkin champions a decentralized and deeply networked population of “prosumers”—consumers who have also become their own producers of renewable energy, 3D-printed products, online education and training courses, news, and culture/entertainment, and who co-share cars, homes, clothes and tools on the collaborative commons. His vision of total interdependence promises perfection in ways great and small—a milk carton or carton of eggs, for example, will sit on a “smart” refrigerator shelf in your “smart” home, sending signals to the grocery store when it is nearly empty, automatically requesting delivery of a refill—no doubt from a driverless car or Amazon drone. It taps into many of the right cultural memes about the promise of sustainability, liberation from work, and yes, efficiency via the most beautiful technology, and of producers producing “from each according to his ability, to each according to his need,” as Karl Marx once famously wrote. Marx meets the Jetsons, in the Land of the Techno Sapiens.

But Rifkin, Warrior and other visionaries never address a looming question: who will control this collaborative digital network? They seem to assume that in this new Edenic world the technologies will have a power and momentum all their own that will iron out any inequalities. But back here on planet Earth, without a clear blueprint for the kind of public policy that would solidify the wobbly ground beneath everyday workers, there’s little reason to assume that these trends will automatically translate into a bright future for the vast majority. The recent track record of our political institutions’ failure to rein in a powerful economic elite is not encouraging.

Despite these rather obvious political realities, I have discovered that if you question the rise and momentum of the digital technologies, some will label you a Luddite or a Ted Kaczynski-Unabomber wannabe. The cultural spell cast by belief in technology’s inherent goodness is so powerful that to express doubts or concerns is to be branded an extremist.

But consider this: in Silicon Valley, there is much serious talk about what is called the “Technological Singularity”—a future period, predicted to occur around 2045, when an artificial intelligence explosion will result in a merger between humans and machines. At this point, machines will achieve true intelligence and even surpass their human inventors as they design ever smarter versions of themselves, a runaway effect that will radically alter civilization. Foremost among the singularity gurus are people like tech legend Ray Kurzweil, who was a pioneer of voice recognition technology and now is a Director of Engineering at Google, where he heads up a team that is developing “machine intelligence.” What does Kurzweil see in his crystal ball?

He predicts that by the 2030s, injected nanobots will plug our brains straight into The Cloud, where ever more of the world’s digital information will be stored; and that by 2045, the computational power of artificial intelligence will be a billion times that of human intelligence. Other experts and visionaries say that by that date humans and machines will have begun to merge into a new civilizational species, homo robotucus. It sounds too fantastic, like the stuff of science fiction, except that some of the world’s most powerful companies are betting sums equal to the GDP of mid-sized nations to make it a reality. All of this is predicted to occur during our lifetimes.

But what if we don’t want to merge with robots and software? Shouldn’t we have a choice? Personally, I don’t think I want my flesh and blood to merge with injected nanobots, digital clouds, bionic body parts, virtual reality, or non-human intelligence algorithms – does that make me a Luddite? I can imagine that some people probably are looking forward to becoming part robot, but should I have to do it too?

The answer, apparently, is yes. Because in a competitive society, if a bunch of techno sapiens become humanoid-bots, they will have a competitive edge over me. They will be smarter, faster, stronger, much like a Blade Runner replicant. It’s like steroids in sports – once some athletes are juicing and becoming home run kings or Tour de France champions, others soon think they have no choice but to inject the juice too. That’s why the sports world has banned this kind of biotechnology – while it produced a number of thrilling athletic feats, it was contrary to other important values about fairness, athletic integrity and health.

So am I a Luddite if I don’t want to merge with machines? Isn’t this merging a form of “nonconsensual sex,” a kind of high-tech, against-my-will coercion? Who is the extremist here?

Tens of millennia ago, homo sapiens out-competed Neanderthals because our species benefited from certain anatomical adaptations that resulted in enhanced brain power and organizational abilities, precipitating the Neanderthals’ extinction. Will homo sapiens go the same route, in competition with homo robotucus?

Making the future safe

What can we do to better ensure that we receive the benefits of technology without its ills? Here are several changes to think about. One obvious change is that Germany and the US must update their methodologies for tracking how people are working today, and how corporations are employing them, to ensure that clickworkers and others are treated fairly, and that our welfare systems are not underfunded by businesses and workers who evade paying taxes.

In fact, data is quickly becoming a main currency of the digital age. A battle is looming over who will control the oceans of data that we are swimming in. The growth of it is exponential: 90% of the data now circulating on the internet was created in just the past two years. Whether tracking how people are working today, or tracking the commercial activities of digital companies and their vast armies of anonymous contractors, or reining in the abuses of company surveillance of employees and misuse of employee data, or cracking down on corporate misuse of our personal data as we use the Internet, new policies that rein in the misuse and abuse of data in the digital age are badly needed.
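
To see just how fast that is, here is a minimal sketch of what the 90% statistic implies, under the simplifying assumption (mine, not the statistic's) of steady exponential growth:

# If 90% of all existing data was created in the past two years, then the data
# that existed two years ago was only 10% of today's total.
total_today = 1.0
total_two_years_ago = 0.10 * total_today

two_year_growth = total_today / total_two_years_ago   # a tenfold increase every two years
annual_growth = two_year_growth ** 0.5                # ~3.16x per year, assuming steady growth

print(f"Implied growth: ~{annual_growth:.2f}x per year")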

A second change would be to require “technology impact assessments,” in which independent experts are consulted prior to the introduction of new technologies to assess the impact on the quantity and quality of jobs. By law, works councils already have some input over technologies that could affect the performance and behavior of workers, yet often the councils do not have the expertise or information they need to evaluate the latest technologies, nor the financial resources or clear legal right to hire outside experts. This area of German labor law needs to be made much more robust.

A third change would be to create “equal jobs for all,” based on a portable and universal safety net for all German workers. Currently, self-employed freelancers are treated like second-class workers. Businesses increasingly hire them because these workers have few labor protections and businesses do not have to pay for their healthcare or other safety net benefits. That in turn has undermined the “good jobs” economy. What if we boldly imagined a world in which “all jobs are created equal”? Why should the lowest worker be treated less fairly than the CEO of Volkswagen or the chancellor of Germany? A portable, universal safety net for all workers would remove the incentive that businesses currently have to hire these vulnerable workers instead of full-time, permanent employees. And that would provide a more stable foundation for the workforce of the future.

One can easily look away and pretend that the future is not arriving, but that won’t stop it. Famed physicist Stephen Hawking, Microsoft founder Bill Gates, Tesla Motors founder Elon Musk and dozens of other top scientists and technologists have signed a public letter warning about the development of artificial intelligence, with Musk uttering his famous phrase that “we are summoning the demon.” As with atomic weapons, they foresee a threat to humanity coming from a technology created by humanity itself. It’s like that riveting M.C. Escher print of two hands drawing each other, except now one hand is erasing its counterpart.

We are entering a science fiction movie that is bound to get more interesting, yet it’s one in which you will not be allowed to leave the theater. The digital age demands the equivalent of the medical profession’s 2,400-year-old Hippocratic oath: “Primum non nocere”—“first, do no harm.” No technology is inevitable; it is the outcome of political decisions and cultural memes, and humans have the capacity to determine the future of our society. But if we don’t ask the right questions now, we may not like the answers that will be appearing all around us over the next 20 years. The algorithms are coming; they are slowly encircling us. If we do nothing, there may be no escaping them.

Photo by Poster Boy NYC
