Still waiting for hypertext

It is amazing how quickly a month can go by! I have been busy in pretty much every regard, which is great but also tiresome. I have been doing a fair amount of reading and have a good variety of concepts to comment on. I will do my best to get through as many as possible, but may have to break it up a bit or come back to some things in greater detail in another post.

To start with, there is the interesting work of Ted Nelson, the first theorist to define the terms hypertext and hypermedia. Ted Nelson (this is a different bio page) has made several efforts, with only marginal success, to influence how we access the mass of information available today. There is Project Xanadu, founded in 1960 to bring the concept of true hypertext to reality.

I say true hypertext because hypertext as we know it today, in the form of the Internet and HTML, is very much a misrepresentation of the idea. Remarkable as it is in its own right, today's hypertext is really just the simplest and most feasible means of bringing the Internet to life. Yet after 40 years of advances in computing capabilities, a true hypertext source is easily within reach.

The demand for hypertext sources is also greater than ever. As information continues to grow in volume and complexity, the challenge of organizing it into useful forms will also grow. By transitioning to hypertext and hypermedia, we will be able to build a continually self-organizing body of information that is completely interlinked. In whatever form the Global Mind comes about, I believe that the engine behind it will most definitely be based on a new generation of hypertext concepts.

The most remarkable part of looking at Ted Nelson's ideas is that they stand in complete contrast to almost every recent technological development I have written about in this blog. The Semantic Web, XML, and RSS are, in his terms, only further distortions of the hypertext concept. What is his assertion? Well, you may want to read for yourself, but I believe my interpretation is reasonably accurate (at least in the abstract sense).

The problem with electronic media as we know it today is that it is simply a recreation of the material world within computers. Information is stored and accessed much as it is in file cabinets and on bookshelves. We open "pages" that are all addressed and linked to a root hierarchy, like pages bound in a book. The awful waste of the power of the computational world is that the linking is one-way. While a root structure is necessary in the real world, it is relatively meaningless in the virtual world.

For example, in the real world you can't have a library that is simply stacks and stacks of paper with no particular order or location. You have to put the information in order on pages, in books, on shelves, in sections, and so on. Yet in the virtual world this constraint can be relaxed if you make the linking go both ways. What does that mean? Well, the way I see it, with everything linked both ways you essentially have a network of continually growing loops. If everything is connected to everything through these loops, you can pick up any page and then move to any other page in the source. No address needed!
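The idea of two-way linking can be sketched in a few lines of code. This is only an illustration of the abstract concept, not anything from Xanadu itself; the `Page`, `link_to`, and `neighbors` names are made up for the example. The key point is that creating a link records it on both ends, so navigation can ignore direction entirely:

```python
# A minimal sketch of two-way linking: every link from one page to
# another also registers a backlink, so a reader can move between
# pages in either direction with no root hierarchy or address.

class Page:
    def __init__(self, title):
        self.title = title
        self.links = set()      # pages this page points to
        self.backlinks = set()  # pages that point to this page

    def link_to(self, other):
        # One call records the connection on both ends.
        self.links.add(other)
        other.backlinks.add(self)

    def neighbors(self):
        # Navigation ignores direction, so loops form naturally.
        return self.links | self.backlinks

a, b, c = Page("A"), Page("B"), Page("C")
a.link_to(b)
b.link_to(c)

# Even though only A->B and B->C were created, C can "see" B,
# and from B we can walk back to A.
print(b in c.neighbors())  # True
print(a in b.neighbors())  # True
```

Contrast this with the one-way links of today's web, where a page has no idea who links to it.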

This is definitely a difficult concept to visualize in terms of media, so think of people instead. Most of us are familiar with the concept of six degrees of separation, but for those of you who aren't, it basically means that any two individuals in a connected society can be linked together through six or fewer connections. An example might be me and Bill Clinton: I have a friend whose uncle has a neighbor whose daughter's college friend knows Bill Clinton. Yet if we looked at the hierarchy of our families, we would likely not be connected until ancient times, if at all.
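Degrees of separation are really just shortest paths in a network of two-way acquaintance links. A small sketch, using the made-up chain from the example above as the social graph, counts the links in the shortest chain with a breadth-first search:

```python
# Degrees of separation as shortest-path search over two-way links.
# The graph below is the illustrative chain from the paragraph above;
# every acquaintance link is listed in both directions.
from collections import deque

friends = {
    "me": ["friend"],
    "friend": ["me", "uncle"],
    "uncle": ["friend", "neighbor"],
    "neighbor": ["uncle", "daughter"],
    "daughter": ["neighbor", "college friend"],
    "college friend": ["daughter", "Bill Clinton"],
    "Bill Clinton": ["college friend"],
}

def degrees(graph, start, goal):
    """Breadth-first search: number of links in the shortest chain."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        if person == goal:
            return dist
        for other in graph[person]:
            if other not in seen:
                seen.add(other)
                queue.append((other, dist + 1))
    return None  # not connected at all

print(degrees(friends, "me", "Bill Clinton"))  # 6
```

In a family tree, by contrast, the only paths run up and down the hierarchy, which is why two strangers' trees may not meet for centuries.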

Societies have two-way links. We not only know our families (our source), we also meet and form connections to other sources. While this abstract concept may now be clearer to me (and hopefully to you), I still don't quite see how it translates into the current world of computers. I will continue to read and maybe track down a working prototype of the concept. Something such as Ted Nelson's ZigZag might be just that. It is hard to find anything of his that is definitely up to date. While he may be brilliant, he certainly seems disorganized!

If you come across any dead links, search for them on the Wayback Machine on the Internet Archive site. Another interesting project out there!

While this post is quickly growing out of control, I do want to post on something else so that everyone knows I haven't gone off the deep end on the concepts proposed by Ted Nelson. I am a very cautious man. I enjoy the insights of a wide variety of views and models of the world we live in, but I have yet to be convinced that there will ever be a single, complete basis.

Anyway, the site of interest is the Vivisimo Clustering Engine. It is basically an automated method for organizing search results, with the intention of making search more effective. For example, I could search for my 1991 Honda Accord and be given the choice of looking at results in the categories Parts, Sales, Reviews, etc. While the concept is not completely novel, the automation of it is what makes it interesting. The Yahoo Directory was probably the original example of Internet search categorization. While I don't know how automated their process was, I do believe it required a fair amount of human involvement, i.e., site owners registering their sites in the proper categories.
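To make the idea concrete, here is a deliberately naive sketch of grouping search results into categories. Vivisimo's actual algorithm is not something I know, so this stands in for the general idea only: the result titles, category names, and keywords are all made up, and real clustering engines derive the categories from the results themselves rather than from a fixed keyword list.

```python
# Illustrative sketch: sort search-result titles into categories by
# keyword match. A real clustering engine discovers the categories
# automatically; here they are hand-picked to keep the example small.

results = [
    "1991 Honda Accord brake parts and rotors",
    "Used 1991 Honda Accord for sale",
    "1991 Honda Accord long-term review",
    "Honda Accord replacement parts catalog",
]

categories = {
    "Parts": ["parts"],
    "Sales": ["sale", "price"],
    "Reviews": ["review"],
}

def cluster(titles, categories):
    clusters = {name: [] for name in categories}
    clusters["Other"] = []
    for title in titles:
        lowered = title.lower()
        for name, keywords in categories.items():
            if any(k in lowered for k in keywords):
                clusters[name].append(title)
                break
        else:
            clusters["Other"].append(title)  # no keyword matched
    return clusters

for name, titles in cluster(results, categories).items():
    print(name, len(titles))
```

Even this toy version shows the payoff: instead of one flat list, the searcher gets a handful of labeled piles to choose from.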

Alright, this post has grown monstrous and my mind is quickly fading. Things are finally starting to fall in place in my life so hopefully I will be able to get back to being more diligent and brief with my posts!
