Henrik Ingo’s critique of the (Geldart) Active Web proposal

Henrik Ingo reacts to our presentation of Joe Geldart’s proposal for an Active Web. Henrik’s intervention is rather technical, but still of interest to non-developers.

Henrik Ingo:

The Document Web definition is fine; it is what anyone would consider “Web 1.0”. What I strongly disagree with is the author’s criticism, or belittlement, of the current “Web 2.0”. In my opinion a significant shift in the web happened with the maturation of the Firefox browser, which released an avalanche of web-based applications and portals that made heavy use of JavaScript and CSS. (If someone doesn’t like the term “Web 2.0”, it may be better and clearer to call this “the advent of AJAX”.)

Before Firefox there were two browsers that supported advanced JavaScript, Internet Explorer and Netscape, but they supported totally different versions of it (the standardised version today is the IE one, a testament to the fact that MS indeed employs some very good programmers, the ones that happened to work on IE from 4.x to 6.x before 2001). Therefore most pages that tried to do anything with JavaScript or advanced CSS supported only one of these browsers, or sometimes tried to support both, often with poor results. And many in the university or Open Source crowds, for instance, were still using text-based browsers – which is notable because at the time this group had significant mindshare in the web’s development. For all of these reasons the use of JavaScript was considered evil by (in my opinion) a majority of web developers, and what was then called “Dynamic HTML” was mostly a phenomenon of the Microsoft camp. (Even today, if you use the web interface to Microsoft Exchange email, it is very nice in IE but barely usable in Firefox.)

With the advent of Firefox – which supported the then-standardised IE style of JavaScript – the situation started changing, since there now was a standard, and a free multiplatform browser to support the standard. Quite soon very cool web-based apps were born, led by Google Maps, Google Mail… This was called AJAX programming, as in Asynchronous JavaScript and XML. Compared to Microsoft’s DHTML evangelisation this was much cooler technology than anyone had ever dreamt of, and the availability of an Open Source browser to support it also made the opposition vanish. So imho this, and not IE 4.x with DHTML support, was the de facto next phase of the web.
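To make the term concrete, here is a minimal sketch of the pattern that AJAX names: an asynchronous request fired from the page, with the document updated in place when the response arrives, and no full page reload. The endpoint “/api/messages” and the “messages” element are invented for illustration.

```typescript
// Minimal AJAX sketch: request data asynchronously and update the page
// in place. The endpoint and element id below are hypothetical.
function loadMessages(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/messages", true); // true = asynchronous
  xhr.onreadystatechange = () => {
    // readyState 4 means the request has completed
    if (xhr.readyState === 4 && xhr.status === 200) {
      const target = document.getElementById("messages");
      if (target) {
        // The original AJAX payloads were XML (the X in AJAX);
        // for brevity we just insert the raw response text here.
        target.textContent = xhr.responseText;
      }
    }
  };
  xhr.send(); // returns immediately; the callback fires later
}
```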

At the same time we had developed some additional techniques – the most significant would perhaps be RSS and the family of XML markups used to provide blog feeds. This led to collaboration between websites beyond linking: you could provide parts of another blog or news site on your own page, for instance. Or, to take a very different example, BookMooch uses Amazon to provide data and cataloguing of books. Yet BookMooch is a site for the free sharing of old books; you’d think Amazon wouldn’t like “helping out” such a project. Not so: in reality lots of BookMooch users end up buying books on Amazon. In fact, BookMooch probably makes most of its income from the money it gets from Amazon for these referrals.
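As a hedged sketch of that kind of reuse: the snippet below fetches another site’s RSS 2.0 feed and extracts its item titles for embedding in your own page. The feed URL is a placeholder, and a real deployment would typically proxy the feed server-side to satisfy the browser’s cross-origin rules.

```typescript
// Sketch: pull a remote RSS 2.0 feed and list its item titles.
// RSS 2.0 wraps each entry in an <item> element with a <title> child.
async function fetchFeedTitles(feedUrl: string): Promise<string[]> {
  const response = await fetch(feedUrl);
  const xml = await response.text();
  const doc = new DOMParser().parseFromString(xml, "application/xml");
  return Array.from(doc.querySelectorAll("item > title"))
    .map(node => node.textContent ?? "");
}

// Hypothetical usage:
// fetchFeedTitles("https://example.org/feed.rss").then(titles => console.log(titles));
```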

AJAX combined with RSS and some other by-then-standard tools (the wiki is a significant one) is in my opinion rightly called Web 2.0. This is very different from the original document-based web and has rightly been given its own name.

Web 2.0 is NOT the social web (like Facebook, LinkedIn). The social web is merely an application of Web 2.0; technically it doesn’t contribute anything new. (Well, apart from Facebook’s innovation of letting 3rd parties develop applications embedded in its own site – that is a great innovation, but it is not “THE social web”.) Why the social web is so hyped is, in this context, in fact a good question; I believe there is in fact a little pyramid scheme to it all. I mean, Facebook is fun and all, but it isn’t THAT fun; I think the effective inviting mechanism plays a part.

This is the point where we are now. Now for my own predictions:

Next we will see the advent of the single sign-on web, most likely embodied in the form of OpenID. (SSO means you don’t have to create new logins for every site; you just use one main identity and password to log in to each site. Obviously the sites you log in to don’t get to know your password, they just accept the referral from your ISP, mail provider, or whatever other OpenID provider you are using.) This imho will add further granularity to the web, in that users can come and go more fluidly than today, where you make a choice to register and join Facebook but not something else. This in turn should foster a development where we can again have smaller sites each providing one small funny little piece of the social web, instead of the monolithic Facebooks of today. This would be in line with what Web 2.0 was all about; Facebook et al. are in fact a countertrend to the Web 2.0 trend if seen in this light.
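For readers unfamiliar with the mechanics, the sketch below shows the first step of an OpenID 2.0 login: the site redirects the user to their identity provider, so the password never reaches the site. The parameter names follow the OpenID 2.0 specification; the URLs are placeholders.

```typescript
// Sketch of the OpenID 2.0 redirect a site (the "relying party") sends
// the user to. The provider authenticates the user and redirects back
// to return_to with a signed assertion, which the site then verifies.
function buildOpenIdRedirect(providerEndpoint: string, claimedId: string): string {
  const params = new URLSearchParams({
    "openid.ns": "http://specs.openid.net/auth/2.0",
    "openid.mode": "checkid_setup",              // interactive login
    "openid.claimed_id": claimedId,              // the user's identity URL
    "openid.identity": claimedId,
    "openid.return_to": "https://example.org/login/verify", // placeholder
    "openid.realm": "https://example.org/",                 // placeholder
  });
  return `${providerEndpoint}?${params.toString()}`;
}
```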

Whether a “decentralised social web” will arise from this is a good question, and whether the Giant Global Graph will emerge from that is an even better question. It might, but it might end up as something entirely different. The GGG is technically possible today, and in how OpenID works there are some similarities to the RDF used in the GGG, so once OpenID becomes popular, the next step might be to externalise (or decentralise) not just your login credentials but also your social connections. But we will know the answer to this in something like 5 years.
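To illustrate what externalised social connections might look like, here is a small RDF sketch (embedded as a string) using the real FOAF vocabulary, which already includes an openid property; the names and URLs are made up.

```typescript
// Illustrative only: a decentralised "who I know" document in RDF/Turtle,
// the kind of data the Giant Global Graph idea builds on. Because it
// lives at a URL the user controls, any site could read it instead of
// keeping its own copy of the social graph.
const foafProfile: string = `@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<https://alice.example.org/#me>
    a foaf:Person ;
    foaf:name "Alice" ;
    foaf:openid <https://alice.example.org/> ;
    foaf:knows <https://bob.example.net/#me> .`;
```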

The proposal at the end on new HTTP commands is just pure folly (it is just the wrong place to do it, period), which underlines that the author wasn’t just slightly off with his Web 2.0 comments, but in fact knows nothing at all about the technology he is talking about. To implement such functionality by extending HTTP would imho be quite silly; in fact a peer-to-peer protocol like SIP would probably be a better starting point in the first place, and even then you wouldn’t do it with commands like those, but you’d develop an XML-based document language to transmit this kind of information.
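For what that alternative looks like in practice: SIP’s SIMPLE extension already takes this approach, carrying status information as an XML document (PIDF, RFC 3863) inside ordinary protocol messages rather than as new protocol verbs. A sketch, with the identity and status values made up:

```typescript
// Illustrative only: a PIDF presence document of the kind SIP SIMPLE
// transmits in a NOTIFY body. New information is expressed in the
// document language, not by adding commands to the transport protocol.
const presenceDocument: string = `<?xml version="1.0" encoding="UTF-8"?>
<presence xmlns="urn:ietf:params:xml:ns:pidf"
          entity="pres:alice@example.org">
  <tuple id="t1">
    <status><basic>open</basic></status>
  </tuple>
</presence>`;
```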

Michel Bauwens’ comment: Henrik, could it be that the Active Web proposal has merit without being tied to a specific technical proposal on how to implement it? It seems that your critique is mostly focused on the latter.

2 Comments

  1. Michel Bauwens

    Athina Karatzogianni:

    “I think this piece makes a serious point in relation to our understanding of knowledge and how our philosophy of human-to-human and human-computer interaction influences knowledge, and particularly ‘universal’ knowledge (a truly bizarre utopian Enlightenment concept), as well as the uncertainties created by ‘globalization’ and technocracy.

    The article deserves attention because it warns against relying on ‘boolean’ logics in IT terms – and, I would argue, also against relying on ‘binary’ terms in the political sense (although that is more relevant to the point of allowing for interaction, disagreement and, happily, conflict, as they produce the most interesting results).

    What has been done ‘technically’ with Web 2.0 is not enough, and fundamentally the architecture is still the same, relying on certain logics of ‘universal’ truths, fact triangulation and customer-client relationships, instead of networking and building on each other and, why not, even producing ‘biased’ knowledge. The reason Web 2.0 is not that ‘fun’ is because it is impersonal (or often too personal!), alienating to the computer-illiterate, and does not cater for exciting interactions for those who are IT-literate.

    To put it simply, some aspects are too centralized (control of platforms, software, e-commerce etc.), while others are too scattered and lost to all but the few better-known blogs and webpages. The architecture is not enabling, because it was devised for different purposes.

    All the great efforts and amendments to that will always fall short. It is like a house constantly changing builders, architects and engineers: however good these people are, I find it difficult to see better cyberspaces emerging unless the foundations are looked at, and not only in technical terms.

    The philosophy part this author advances is in my opinion spot on.”

  2. Henrik Ingo

    Michel, I have to admit I didn’t even read all of the Active Web part of the article, because it didn’t make sense. As I see it, the Web is becoming active, and the author just fails to see it because he knows too little of what he is writing about, so he is looking for it in the wrong place. (I’m sorry, I’m not usually this harsh on anyone in public, but since you are asking.)

    In my opinion:

    For instance, to continue using the HTTP protocol as an example, its purpose is to locate a document and deliver it to you. There is no room for “fuzzy logic” here; nobody would want to use a web where you get a page that is ALMOST the one you requested.

    The fuzzy, non-binary part is contained within the documents that are delivered, and I argue that to some degree we have already made great advances towards a more colored and less black-and-white reality. Instead of reading THE political truth from the one major Finnish newspaper, you can read a lot of blogs on the net with DIFFERENT viewpoints. You can go to Wikipedia to read an article that is ALMOST factually correct. Etc…

    Furthermore, there is computer science that deals with non-binary logic, in particular Artificial Intelligence practices like Fuzzy Logic or Neural Networks (which try to emulate how we know our brains work – really interesting). While this is much more difficult than “normal” programming, and these techniques are therefore not as prevalent as building blocks of the Internet today as the “binary-based” technologies are, we do use them today:

    – Spam filters are often based on Bayesian filters or some other NN techniques (a toy sketch follows this list)
    – Google tries to find you pages that you are most likely going to be interested in (as opposed to pages whose content most exactly matches the keywords you search for)
    – Amazon will send you advertisements for books that it thinks you might be interested in, etc…
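    As a toy illustration of the Bayesian idea mentioned in the first point: combine per-word spam probabilities (learned from labelled mail) into one overall score under a naive independence assumption. The word table below is invented.

```typescript
// Toy Bayesian spam scoring sketch; the probability table is made up.
const spamProbability: Record<string, number> = {
  "viagra": 0.97,
  "free": 0.80,
  "meeting": 0.05,
};

function scoreMessage(words: string[]): number {
  // Sum log-odds so that many small probabilities don't underflow.
  let logOdds = 0;
  for (const word of words) {
    const p = spamProbability[word.toLowerCase()];
    if (p !== undefined) {
      logOdds += Math.log(p / (1 - p));
    }
  }
  return 1 / (1 + Math.exp(-logOdds)); // back to a probability
}

// scoreMessage(["free", "viagra"]) -> ~0.99 (likely spam)
// scoreMessage(["meeting"])        -> ~0.05 (likely legitimate)
```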

    Also, regarding Athina’s comment above, I see Web 2.0 as being exactly like this. The blogosphere is not a customer-client relationship; it is a “networking and building on each other” phenomenon.

    henrik
