The case against iPad and Apple as a temporary fluke

Via the Rationalitate blog:

Tim Lee has an interesting analysis of the shortcomings of Apple’s iPad, but at the end he makes what I believe is a very prescient, more general point about the future of intellectual property and digital media:

“This is of a piece with the rest of Apple’s media strategy. Apple seems determined to replicate the 20th century business model of paying for copies of content in an age where those copies have a marginal cost of zero. Analysts often point to the strategy as a success, but I think this is a misreading of the last decade. The parts of the iTunes store that have had the most success—music and apps—are tied to devices that are strong products in their own right. Recall that the iPod was introduced 18 months before the iTunes Store, and that the iPhone had no app store for its first year. In contrast, the Apple TV, which is basically limited to only playing content purchased from the iTunes Store, has been a conspicuous failure. People don’t buy iPods and iPhones in order to use the iTunes store. They buy from the iTunes store because it’s an easy way to get stuff onto their iPods and iPhones.

Apple is fighting against powerful and fundamental economic forces. In the short term, Apple’s technological and industrial design prowess can help to prop up dying business models. But before too long, the force of economic gravity will push the price of content down to its marginal cost of zero. And when it does, the walls of Apple’s garden will feel a lot more confining. If “tablets” are the future, which is far from clear, I’d rather wait for a device that gives me full freedom to run the applications and display the content of my choice.”

Lots of interesting discussion in the comments following Tim Lee’s article.

Andy Robinson echoes the points about the dangers of using such freedom-limiting technologies:

“This isn’t just about whether intellectual property is abstractly a good or bad thing, or whether the overall outcome of DRM is beneficial or not. It’s about the dangers of control built into the relationship between users and technology. The argument “it saved the music industry” could as well be applied to mass executions of file-sharers or their deportation to Guantanamo Bay. People in favour of some generic prohibition often become absolutely fanatical about keeping it in place – look, for instance, at what has happened with the drugs war. Since limits to power will often make it impossible to enforce whatever prohibitions one happens to think justified, limits have to be drawn on how far one is prepared to go to enforce them – otherwise every case of a difficult or impossible to enforce rule will be an occasion for slippage into totalitarianism. The result is a need to distinguish a general frame of restrictions within which social goals can be pursued from the desirability of the goals themselves, and to prioritise the former over the latter. There are limits to the price which can be conceded for any particular enforcement advantage. Maintaining basic rights and a proper balance of power, and preventing the powerful from becoming too strong, takes priority over the production of desirable aggregative social effects.

The ground zero of this is that the technology in question is wrong. And it’s creepy, and it’s creepy because it’s wrong.

Technology which can be remotely told what to do without permission from its user is scary.

The fact that people using DRM-enabled systems have had files deleted without their consent is scary.

The reason it is scary is twofold. Firstly, it has ceased to be a tool in the hands of the user, and has become an agent of a foreign will. And secondly, it gives far too much power to the people who control the updates. In short, it violates the right of the user to be in control of the tool they use. (I’m drawing here on Illich’s use of the term ‘tool’). This affects the balance of power in the social field in general. It’s FAR bigger than the question of whether people should download music for free. It’s about whether people are to be free or enslaved.

This is because the principle which justifies control in this case would also justify control in a million other cases – like the hammer (isn’t it worth it if a few murders, or X amount of criminal damage, could be prevented?)

Of course, once in place the technologies of control will be used for purposes other than those originally intended. Some song causes political controversy (such as the outcry over ‘Cop Killer’, or the Marilyn Manson/Rammstein hysteria after Columbine) and the company could be pressured to delete entirely legal copies for political reasons. They could be ordered to turn over records of who was listening to a song later deemed to correlate with some kind of criminality (like the Patriot Act library dragnets). An artist or company could decide to withdraw their work (as with Lucas’s stance on the Holiday Special, and Kubrick’s withdrawal of A Clockwork Orange) and they could pull every copy. This might happen if a studio were in dispute with an artist – suddenly the artist would disappear. And what if Apple ended up in a dispute with a studio? Again – the entire catalogue of the studio could disappear (either because Apple withdrew it as an act in the dispute, or because the studio demanded such a measure in court). This is before we even get onto what the likes of China and Iran could do with this technology. It would be far better if the technological capacity to do such things were never actualised or normalised.

A good tool is something for use, like a hammer – not something which has particular uses built into it. Pretty much all tools can be put to legal and illegal uses, or to harmful and harmless uses. This can’t justify the project of building controls on how tools are used into the tools themselves. Would you really want a hammer that would decide what you may hit, distinguish corporate-sanctioned uses from non-corporate-sanctioned uses, and amend its rules on what it could hit, without your being able to veto it, in response to its maker’s commands? Wouldn’t a hammer which did that be rather creepy? Wouldn’t you rather have a regular hammer?

Basically, we don’t need technologies making our decisions for us, deciding which uses they will allow. It gives far too much power to the tools, and therefore to whoever is sending the long-range commands. Things have already gone too far in this direction with mobile phones. There’s no good reason why calls aren’t encrypted, why phones report their location from afar, why they can be turned on from a distance, or why SIM cards can be disabled remotely. It all makes political abuse so much easier, and the devices so much less useful. The only reason it’s been allowed is that it was sneaked in along with the technology when it was introduced, and then normalised (and in some cases legislated) once it was already established. Eventually, no doubt, somebody will design mobile phones based on a distributed model which function simply as tools rather than surveillance devices, which don’t generate records of where you are or who you’re talking to. And then all hell will break loose because of their greater functionality.

It’s the same problem with iPods and these new devices – the convenience of the technology, its monopolisation for a short period by a few companies, and the relative invisibility of its inbuilt constraints outweigh the negative impact on functionality. They aren’t relying on widespread support for DRM, nor on any greater competitiveness of less-functional technologies. They’re relying on market dominance to generate enough convenience to outweigh the obvious disadvantages to the end-user.

Unless a vast control-regime is established to keep this state of affairs in place, it will end up being a temporary advantage. If all of this goes too far, if reduced functionality becomes a serious inconvenience or if abuse becomes too visible and creeps people out, we will see people deserting the new technologies for older ones where they at least know where they stand. A radio may have less functionality than an iPod, but it won’t tell anyone what you’re listening to. But people wouldn’t have to go that far. They’d just have to opt for older systems with greater functionality. Notice how Microsoft have basically been forced to revive Windows XP, because people preferred XP to Vista in spite of the latter’s add-ons.”

1 Comment

  1. Michel Bauwens

    Commentary by Eric Hunting, via email:

    Apple has, ironically, always been something of a throwback in terms of its hermetic design/development methodology. It’s rather like those companies that manufacture high-performance sportscars or ‘supercars’: they produce cutting-edge machines, yet their development and manufacturing process goes right back to the era of ‘coachworks’ at the start of the 20th century. Apple still makes personal computers like it’s the early 1980s. While most of the industry is into design by mostly off-the-shelf systems integration, Apple will push the envelope in terms of having exclusive components developed to fit its trend/standard-divergent designs, because it assumes it leads everyone else as to where the evolution of personal computing is going. Usually, it’s right. Sometimes it’s way off the mark because, in its hubristic assumption that it leads everyone, it doesn’t always pay attention to what’s going on in the world outside the Cupertino cloisters.

    The original software openness of Apple was based on convenience. In the early part of the personal computer revolution, software development lagged terribly behind hardware development -it still does- and the relatively small companies breaking into this field had limited resources to spend on application development. It was in their interests to turn as many end-users into developers as possible, cultivating a market for their products by cultivating a pool of third-party applications that would give them an edge over less open companies like IBM, which had traditionally ‘owned’ all their key applications and had no hope of competitively diversifying their application family single-handedly. Back in the 80s, it was as if Apple -or at least its exceptionally enthusiastic and idealist user culture- had practically invented the concept of freeware. But as the personal computer industry matured and software development became an industry dominated by big companies, it became progressively harder to be an independent developer. Across the industry, software tools became increasingly expensive and complex, and access to increasingly exclusive APIs more costly and restrictive. This is not a recent phenomenon, nor is it in any way exclusive to Apple. It’s common for many mature industries -rooted in the compulsion to monopoly- to seek to lock up knowledge and suppress the emergence of potential competitors -even if the inevitable result of this is Detroit. Profit doesn’t need progress. It just needs market share. In a mature market, innovation is only a tool for stealing market share from others -but suppressing innovation to maintain market share is an easier strategy for the large company. But is that really what the progressive locking-up of Apple products is about?

    Apple has long acted like it -or the Mac- was in competition with the PC platform just like in the olden days of the personal computer -which is sort of like how North Koreans think they are at war with Capitalism when Capitalism couldn’t care less. But in recent years, as the company has shifted to increasingly blobject-like personal computer products that have no standardized architecture even across their own product lines, it seems as if a different computing paradigm has been emerging in the company. There is no definitive Mac hardware platform anymore. There is only Mac OS: a software/user interface platform. And now there’s an iPhone OS platform in some ways in direct competition with the Mac OS. There’s a different logic from that of the past underlying all this, and it wasn’t quite clear to me what this was until the iPad appeared. That design seems to have condensed this paradigm into a single physical form. What the design of the iPad seems to be saying is that hardware platforms don’t matter anymore because the Internet is now the dominant, overriding platform. Personal computers are redundant. We need only appliances that provide a front-end for the Internet. So it doesn’t matter that we lock up the hardware or even the system environment, because it’s all about the content on-line and the user experience we craft with our means of delivering it -through appliances. Now, thanks to their hermetic development culture, it looks like Apple may be taking this thinking too far too soon for a computing culture that’s still talking about OSes like they matter. But here’s the key point: the iPad is NOT a personal computer. Any comparison of it to a personal computer -like all the knee-jerk comparisons to netbooks- makes no sense. What it is is very plainly stated in its name, ‘i’ ‘Pad’ -at least for those who know their computer science: a Personal Access Device for the Internet. The PAD concept originates in the realm of Ubiquitous Computing, which is based on the notion that a personal computer is no specific collection of hardware but rather exists as a ‘personal domain’ on a network which we link to through PADs wherever there’s some kind of network connectivity.

    This parallels a concept called the Distributed Computer that I’ve been talking about for at least the past decade and a half (and which can be seen in more detail here: http://tmp2.wikia.com/wiki/Aquarian_Digital_Infrastructure). I have long predicted that the future of the personal computer would be that it would, in terms of hardware, break apart and become a free-form collection of self-contained network appliances of largely generic interoperability, and that what we now think of as a personal computer would instead become a personal domain existing distributed across these devices and across the collective network. It would be as if you replaced motherboards with the Internet and, instead of the digital Swiss Army Knives that we call computers today, the computer hardware we own would be reduced to a collection of specialized devices: storage servers, processing servers, network interface units, and PADs in a vast assortment of forms (from worn devices like Bluetooth headsets, to laptop-like devices, to tablets in an endless number of sizes from cell-phone-like to TV-like, to robots and toys), all specialized for different ergonomic modes of use and different ranges of applications. Some PADs might be more PC-like -more like the digital Swiss Army Knives, with more built-in self-contained capability and less specialization in ergonomics- in order to accommodate situations of poor connectivity, like when you’re hiking in the woods out of reach of WiFi. But the more the net infrastructure spreads, the less that is necessary, and in the home these devices would take on very specific roles like a TV, a reading or drawing tablet, a writing workstation, a pocket PDA (phone, control panel, simple touch display), a touch-control panel on a fabber, or a CAVE (Cave Automatic Virtual Environment) immersive entertainment room. This specialization at the front-end becomes less difficult the more generic the architecture on the back-end and the more commodity-like that hardware becomes. (A minimal code sketch of this ‘personal domain’ idea follows the comment below.)

    In such a computing environment, the openness of individual pieces of hardware matters less than the degree of interoperability they all share under a network platform and common user interface metaphor. But the question is, can you really achieve the latter without the former? And do we, at present, actually have the technology of interoperability on-line that this calls for? In design the iPad is PAD-like. But it’s still highly dependent upon internal resources and on its own means to function when there’s no net connection. It’s still very much the Swiss Army Knife. There’s nothing else in the Apple product family that corresponds to the other hardware elements of the rest of the Distributed Computer -though the virtually defunct Mac Mini is the logical form-factor for that kind of hardware (they have recently introduced a Mac Mini configured as a DVD-less dedicated Snow Leopard server, which is very close to a generic processor unit). So it’s sort of like Apple is assuming that, for the time being and for the apparent majority of people whose computing needs aren’t all that sophisticated, the Internet as a whole is robust enough to function like a cloud computing environment without a specific cloud platform, and the role of the PAD can be focused on, essentially, that of a hardware browser. Is this a viable assumption? We’ll have to see. There seems to be a lot more to this design experiment than most people are cognizant of.

    Eric Hunting
    [email protected]
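To make the “Distributed Computer” and ‘personal domain’ idea in Hunting’s comment a little more concrete, here is a minimal, purely illustrative Python sketch. It models a personal domain as a roster of role-specialized network appliances that a PAD merely fronts for; every class, device, and role name below is hypothetical and does not correspond to any real product or API.

    # Illustrative sketch only (Python 3.10+): a "personal domain" as a set of
    # role-specialized appliances on a network, with PADs acting as front-ends.
    # All names are hypothetical; nothing here models a real product or API.
    from dataclasses import dataclass, field
    from enum import Enum


    class Role(Enum):
        STORAGE = "storage server"        # holds the user's files
        PROCESSING = "processing server"  # runs heavy computation
        NETWORK = "network interface"     # provides connectivity
        PAD = "personal access device"    # ergonomic front-end (tablet, headset, TV...)


    @dataclass
    class Appliance:
        name: str
        role: Role
        online: bool = True


    @dataclass
    class PersonalDomain:
        """The user's 'computer': whatever appliances the network can reach."""
        owner: str
        appliances: list[Appliance] = field(default_factory=list)

        def add(self, appliance: Appliance) -> None:
            self.appliances.append(appliance)

        def find(self, role: Role) -> Appliance | None:
            # Any online appliance with the right role will do; the specific
            # hardware is interchangeable, only role and membership matter.
            return next((a for a in self.appliances if a.role is role and a.online), None)


    if __name__ == "__main__":
        domain = PersonalDomain(owner="alice")
        domain.add(Appliance("home-nas", Role.STORAGE))
        domain.add(Appliance("mini-server", Role.PROCESSING))
        domain.add(Appliance("reading-tablet", Role.PAD))
        domain.add(Appliance("tv-panel", Role.PAD))

        # A PAD delegates to the domain's storage/processing appliances when it
        # can reach them, and falls back on local resources when it cannot.
        storage = domain.find(Role.STORAGE)
        print(f"{domain.owner}'s files live on: {storage.name if storage else 'local cache only'}")

The point of the sketch is only the shape of the thing: the ‘computer’ is the domain, not any single box, and a device like the iPad is just one interchangeable front-end onto it, which is the reading Hunting suggests above.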
