Is the world wide web evolving, dying or merely pining for the fjords? A young web developer takes issue with Bill Thompson's call to dump the web. Eavesdrop on the techies slugging it out over HTML, distributed processing, <IMG> tags and illiterate waiters. The future of your desktop is at stake.
Relax, internet users, the World Wide Web is not dead, far from it. If you are worried by Bill Thompson's call to dump the web before it dies, don't be.
Here's why.
Bill starts off with some problems of Hypertext Markup Language (HTML) and Hypertext Transfer Protocol (HTTP), and then talks about the (non-)use of distributed processing. Bill proposes that the browser-centred internet should be split up into various specific clients for receiving specific types of information, an approach that some e-mail, music-sharing and chat programs already take.
He is undoubtedly right to pull apart web inventor Tim Berners-Lee's desire for simplicity, or at least his complaint about where that stateless simplicity could leave us is plausible. The solution he proposes, however (dumping the web in favour of an ill-defined distributed-processing model), and the glass-half-empty reasoning he uses to argue that the web's evident shortfalls mean it is dead beyond fixing, are far from convincing to the active web software developers I know. Despite the technical detail of his argument, it shouldn't convince anyone else either.
There's no denying that the web has a wide spectrum of problems, but those problems are the reason it has got better and will continue to, not the reason it will die. After all, when it was invented there were no images and only a very small range of colours.
The statelessness of the web
The stateless problem outlined by Bill is definitely a serious design flaw of the web. To describe this problem he uses the metaphor of an amnesiac waiter who cannot remember anything and relies on tattoos for memory.
Because the relationship between client and server is anonymous, every request, every course, has to be treated separately, because the server, our waiter, can't remember properly. In fact the problem is better articulated as one of an illiterate waiter: while he can't write the order down, he may use other, if less reliable, techniques, such as facial recognition and memorisation, to recognise you and give you what you want.
It was definitely an oversight on Berners-Lee's part to imagine that requests made to a web server should be anonymous, and to leave any sort of identification system between client and server out of the world-wide web from the beginning, but the problems are no longer what they were, or what Thompson describes them to be. Even the illiterate waiter of the newer metaphor is already reading easy novels like Harry Potter; it was about ten years ago that he couldn't read at all.
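How a server now "remembers" its customers can be sketched with HTTP cookies, the mechanism that grew up to plug exactly this hole. The session name and value below are hypothetical, and this is a minimal illustration of the round trip using Python's standard http.cookies module, not any particular site's implementation:

```python
from http.cookies import SimpleCookie

# Server side: on a customer's first visit, mint an identifier and
# hand it over in a Set-Cookie response header.
issued = SimpleCookie()
issued["session_id"] = "abc123"      # hypothetical identifier
issued["session_id"]["path"] = "/"
print(issued.output(header="Set-Cookie:"))
# Set-Cookie: session_id=abc123; Path=/

# Client side: the browser stores the cookie and echoes it back with
# every later request, so the "waiter" can match each new order to a
# customer he has already served.
returned = SimpleCookie()
returned.load("session_id=abc123")
print(returned["session_id"].value)  # abc123
```

The waiter, in other words, no longer needs to recognise your face: you carry a numbered ticket he issued, and you show it with every order.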
Indeed, one of the biggest problems with Thompson's account of the web is how out of date it is. How is it, for example, that millions of people can check their e-mail on websites like Hotmail, Yahoo and MSN if the web today is still being held back by its stateless, anonymous protocol? Many people even check their bank balances and prescriptions, confidential personal information, over the world-wide web.
HTML too has come a long way. <img> tags (the elements in HTML responsible for specifying an image to be displayed) are no longer non-standard extensions; they have a standardised structure that is incorporated into all of the modern (X)HTML standards and other XML-based languages.
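The difference is easy to demonstrate: the XHTML form of the element is well-formed XML that any XML parser will accept, while the old unclosed form is not. A small sketch using Python's standard xml.etree module (the attribute values are made up):

```python
import xml.etree.ElementTree as ET

# XHTML requires every element to be closed, so <img ... /> parses
# cleanly as a self-contained XML element.
xhtml_img = '<img src="logo.png" alt="Site logo" width="88" height="31" />'
elem = ET.fromstring(xhtml_img)
print(elem.tag, elem.attrib["alt"])  # img Site logo

# The legacy, unclosed HTML form is not well-formed XML at all.
try:
    ET.fromstring('<img src="logo.png">')
except ET.ParseError:
    print("legacy form rejected")
```

That standardised, machine-checkable structure is what lets generic XML tools process web pages at all.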
This statelessness of the web is also now being overcome: with Google Suggest, Google Labs has reached a new and impressive landmark in the dynamic web, mimicking a desktop application's responsiveness better than anything seen in a web browser before. Not only is it a persuasive answer to the gripes about statelessness; it demonstrates the innovation and creativity that has kept, and will continue to keep, the web alive.
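What a suggest service does on each keystroke can be sketched in a few lines. Google's actual ranking is of course not public; the function and the vocabulary below are hypothetical, showing only the core idea of returning ranked completions for a typed prefix:

```python
def suggest(prefix, vocabulary, limit=5):
    """Return up to `limit` completions for `prefix`, most popular first."""
    p = prefix.lower()
    matches = [(term, hits) for term, hits in vocabulary.items()
               if term.startswith(p)]
    matches.sort(key=lambda pair: pair[1], reverse=True)
    return [term for term, _ in matches[:limit]]

# Hypothetical query counts standing in for real usage data.
vocab = {"web browser": 900, "web server": 700,
         "web design": 400, "waiter": 50}
print(suggest("web", vocab, limit=2))  # ['web browser', 'web server']
```

The browser fires a request like this on every keystroke and paints the result without reloading the page, which is exactly the kind of responsive, stateful feel critics claim the web cannot offer.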
Innovations and cookies
The problems of reliability with sessions and cookies that Bill identifies, the two best methods of identifying web users and solving the anonymity problem, arise not from the essence of the web but from two other sources: paranoid users who turn cookies off because they think they will be spied on, and very old browsers. There are some less desirable cookies, used to make sure you don't see the same set of banner ads twice, but other than that there is no reason whatsoever to turn off cookie functionality in a browser. As for old browsers, it's simple: most modern browsers are freely available and it's easy to get a new one. The same applies to most new technology. My old Nokia 3210 can't take photos, but that doesn't make the mobile phone a dying communication method. I need to upgrade to get photos, and by and large the phone company makes it easy and cheap for me to do so.
It is interesting to compare the web to other inventions that have evolved a long way. In the 1960s driving a car was a more primitive technological experience than it is now: there was no power steering, no anti-lock braking, no air conditioning, no catalytic converters. Engines were on the whole less efficient. Did that mean the car was doomed? No, and neither is the web, though it does still have various shortcomings that need to be invented around.
Another of those serious problems from a developer's point of view, touched on by Bill, is that the system for designing and laying out web pages, HTML, is applied in non-standard ways. Here too Bill has rightly raised a flag, but he contradicts himself by first lauding and then lamenting it.
Actually, any developer's relationship with these standards is a difficult one to keep on good terms, especially given the dictatorial way they are announced and the way most browsers still tolerate violations of them. But it is in embracing these standards that a stronger and more universal web will come into being and continue to evolve. In fact, there is a huge organisation devoted to designing and improving them: the World Wide Web Consortium, known as the W3C. Without the W3C, the problems that Bill mentions would probably be a lot more serious, though they would still not signify the death of the web.
When I say evolve, I do not share Bill's vision of separate programs for separate functions of the internet, though that does exist up to a point and always has. The reason chat, e-mail and music-sharing programs came about lies in the general and inherent requirements of those corners of the internet: chat is highly dynamic (though it too exists in web form; see AIM Express); e-mail is at least convenient to keep locally on your hard disk; and music sharing already happens over the web in any case. The logical direction is for programs to merge, not divide. Why use 15 programs when one can do the same thing? People are already using the web enthusiastically for shopping, publishing, obtaining software and music, getting the news, dictionaries, encyclopaedias, and even sex.
The main restriction of the web, from a consumer's point of view, is bandwidth, and this is one of the areas that has seen the most consistent invention and improvement. The reason you can't do even more with a web browser, such as receiving high-resolution video, sound and other rich content, is bandwidth alone, not amnesiac waiters or <img> tags; the latter of which, I should again point out, were standardised by the W3C in XHTML (the new HTML standard based on much stricter XML syntax) anyway.
Hopefully the problems now seem far smaller, and the web far less dead, than they did before, and that is before the open-source movement, thousands of committed developers, many of whom work for little or no pay, has even been mentioned.
So the web is not bound in the way it was in 1994, or whenever Bill is casting his mind back to. The technology has changed and improved, and ultimately the problem-solving creativity of web developers and engineers alike is easily capable of building many an eight-lane suspension bridge over any small ditch of a problem.