Joe Geldart on the future of the (active) web

What comes next?

Joe Geldart has an extensive interpretation of the history and future of the web, which I found very appealing as a non-technical observer, but which our associate Henrik Ingo criticizes on technical grounds. We recommend reading the whole article, of which we present a summary here; tomorrow, we will present Henrik’s commentary.

For background, Geldart distinguishes an evolution in three steps, from the original Document Web, via the current Data Web, but towards the coming Active Web.

Joe summarizes the past evolution as follows:

Both the Document Web and the Data Web rely upon a very simple idea; that of providing names to things. In the Document Web, these names are locators; they tell you where to find documents so that you can download them. Appropriately, these names are called Uniform Resource Locators (URLs). The Data Web goes a step further and provides the ability to name things you can’t download, such as the book you just read, the idea you just had and the action you’re about to do. These names are called Uniform Resource Identifiers (URIs) and subsume URLs.
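The distinction can be made concrete with a small sketch. Every URL is a URI, but a URN (another kind of URI) names a thing without saying where to fetch it; the specific identifiers below are only illustrative.

```python
from urllib.parse import urlparse

# A URL names a location: it tells you where to fetch a document.
url = "https://example.org/articles/active-web.html"

# A URN is a URI that names a thing without locating it,
# e.g. a book identified by its ISBN (illustrative value).
urn = "urn:isbn:0-395-36341-1"

for uri in (url, urn):
    parts = urlparse(uri)
    print(parts.scheme, "->", parts.netloc or parts.path)

# Both parse as URIs; only the first has a network
# location you could actually download from.
assert urlparse(url).netloc == "example.org"
assert urlparse(urn).netloc == ""  # a URN locates nothing
```

Both strings are valid URIs with a scheme, but only the `https` one resolves to a place on the network — which is exactly the gap the Data Web exploits.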

What then are we moving to?

The current models of the Web are very passive and static things. By contrast, humans are active and dynamic. All the information which is on the Web is a product of human action in some form. Whether it be written by hand, or the result of a human-conceived piece of software which automates a task, there is nothing available in the medium which hasn’t been touched by human thought in some fashion. Now, human conception isn’t a static thing; we aren’t born with everything we are ever going to know. We learn, we adapt, we make mistakes in our beliefs which we then correct. All this happens instinctively, and without effort.

It should be clear from this that the assimilation of information, its understanding and subsequent dissemination, can be seen as a form of process. We learn by acting. We communicate by acting. Our use of the Web is just a particular form of action, allowing us to find parcelled snippets of another’s thoughts. As the disagreement problem shows, there is no inherent semantics to the information on the Web, just the meanings we acquire through our readings.

This then is what I propose as the next phase of the Web; accepting the fundamental rôle that process plays. This entails a number of important changes in perspective. Rather than treating information on the Web as having meaning in and of itself, it only gains its meaning through its users (be they machine or human). Equally, the openness of the Web entails that we accept disagreement and provide mechanisms to deal with it which don’t require us to discard so much information. Rather than a fundamental distinction between producers (the servers) and consumers (the clients), we should treat production as merely one outcome of the consumption process. I propose then a Web of equal-agents-before-God, with no a priori distinctions between them, communicating on the same footing. This proposal I term the Active Web.

And this is why the transition is important:

This view of the Web, as a collection of discrete and immutable documents, is what I term the Document Web.

The logical next step is to dissolve the boundaries between documents and provide meaning to the structure of the contents. This is the key idea behind the Giant Global Graph. This has been the focus of two mildly-competing efforts. One, XML, tries to make document representation mechanical and so maximise the reuse of tools. The other, RDF, tries to provide a formal data-model for the Web. I say mildly competing because there is a difference in direction between the two projects, and some overlap in goals. XML starts with the premise that the ‘Document is King’ and represents all data in the form of a tree of structured parts. RDF starts from the idea of the Web, that ‘Connection is King’, and works backwards to specific item representations from there. In light of these very different starting points, we should wonder not why RDF/XML was so bad, but rather that it happened at all. Looking at the Semantic Web project, we can see a large portion of the idea is to liberate data from stifling documents only manipulated as a whole and allow the Web idea to operate right down to the level of a binary assertion. This densely inter-linked network of assertions is what I term the Data Web and represents a true second phase for the Web project.
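The two starting points can be sketched side by side. In the XML view, facts live inside one enclosing tree; in the RDF view, the same facts are free-standing (subject, predicate, object) triples that merge into a graph by simple union. The identifiers and property names below are illustrative, not taken from Geldart's article.

```python
import xml.etree.ElementTree as ET

# XML: 'Document is King' -- data is a tree with a single root,
# and you reach facts by navigating from that root.
doc = ET.fromstring(
    "<book>"
    "<title>Weaving the Web</title>"
    "<author>Tim Berners-Lee</author>"
    "</book>"
)
print(doc.find("title").text)

# RDF: 'Connection is King' -- the same facts as independent
# (subject, predicate, object) triples; URIs here are illustrative.
triples = {
    ("urn:isbn:0-06-251587-X", "dc:title", "Weaving the Web"),
    ("urn:isbn:0-06-251587-X", "dc:creator", "Tim Berners-Lee"),
}

# Assertions from any other source merge by set union: no enclosing
# document boundary to negotiate, just more edges in the graph.
from_elsewhere = {("urn:isbn:0-06-251587-X", "dc:date", "1999")}
graph = triples | from_elsewhere
assert len(graph) == 3
```

The union at the end is the point: a binary assertion can be added, shared, or disputed on its own, which is what "operating right down to the level of a binary assertion" buys you over whole-document manipulation.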

The purpose of the Web is to allow us, its users, to share information and understanding (granted, given the perceived quality of the majority of the content, this may seem like a lofty and idealistic purpose, but more on that later). The Document Web allows us to post pages detailing the most tedious minutiae of a topic and link them in with other pages so that people can find them and share in our ennui. The Data Web allows us to tear down the artificial silos that divide our knowledge and benefit from emergence; the whole is more than the simple sum of its parts. It seems debatable whether it is possible to go further than that; however, it is worth noting that the Data Web is very well-suited to blind aggregation, but questionable in its approach to human-oriented knowledge.
