There is some commentary over on 43 Folders about how a lot of people are becoming huge XML consumers, and both simplifying and enriching their lives in the process. At least I think that’s roughly the point of the commentary. It’s a bit more abstract, sort of a “what are you doing with XML?” but I think the point is that people are starting to smell that there is a revolution on.
Late last month I was talking about some of these things on my old blog, which I’ve reproduced here for the time being, until I decide how to merge the content together. Some editing was done for brevity:
All that promise of XML, standards, divorced content and layout … well, it’s actually starting to happen. Browsers are now like cell phones; they all more or less use the same technology, to accomplish the same stuff, on the same sorts of infrastructure. Just like phones, the real fun these days is in what you can do with the devices themselves, when driven by the content and infrastructure.
The key realization I came to is that I don’t browse the web anymore. I don’t visit people’s web sites to check for new notes, new photos, new ideas, new thoughts. If I want to find particular information, Google can provide it for me. If I want to catch up on the news, I check my aggregator; I don’t waste my time browsing half a dozen news web sites. Even better, there are certain news topics in which I’m interested, and everything that has happened with those topics is already going to be aggregated and available. If I want to know what my friends have to say, what Clive Thompson has been thinking about, what new movies have been released, or just about anything else, I drag the appropriate feeds down in my aggregator. Sure, if the article has more to say than the summary provides, I’ll go visit the web site for the full article, complete with images, advertising, and links to other sites I may visit (and, depending on what I find, possibly add to my aggregator and never visit directly again). For me, the web as it was when I started using it is dead. It is a database too big to use directly; the only future I see is in an abstracted interface to the web, a metacontent portal if you will.
I was doing some more thinking about this yesterday while talking with a friend, and came to some further realizations. I don’t know that I will ever pick up a paper newspaper again. Sure, if I’m off doing something where I won’t be touching computers or the Internet (like on vacation), I may read a real physical newspaper, but probably not (why seek out depression while on vacation?). The rest of the time, I take a first cut of my news by aggregating a few news sources that I more or less trust. What has surprised me is that I really don’t look at these as much as I thought I would, only hitting the core “news” feeds when I get bored.
Instead, I read the news through yet another layer of abstraction — but this time a human one. In my experience, you read the news you are interested in indirectly, because the people you aggregate read the news that interests you. This expands on the metacontent portal idea by realizing a virtual layer on the front end of the process. I’m not doing a particularly good job of describing this; essentially, what I’m talking about is something like this:
Essentially the idea is that you have content on the left. This can be the news, weather, sports scores, stock prices, whatever. It can also be content such as blog entries, journal entries, audioblogs, photoblogs, and the like. For some of these things, you want to directly aggregate the data. For example, the weather, stock prices, and entries from your friend’s journal are something that it makes sense to just directly receive through the direct aggregation layer.
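To make the direct aggregation layer concrete, here is a minimal sketch of what I mean. The data shapes and function names are my own invention, not any real aggregator’s API; the point is simply that every entry from every subscribed feed passes straight through to the reading list, untouched:

```python
# A toy direct-aggregation layer: subscribe to feeds and pull every
# entry straight through, with no filtering at all. Feed contents
# here are hypothetical examples.

def direct_aggregate(feeds):
    """Collect all entries from all subscribed feeds, newest first."""
    entries = []
    for feed in feeds:
        for entry in feed["entries"]:
            entries.append({"source": feed["title"], **entry})
    # Sort by timestamp so the reading list interleaves sources.
    return sorted(entries, key=lambda e: e["time"], reverse=True)

feeds = [
    {"title": "Weather", "entries": [
        {"title": "Rain tomorrow", "time": 2},
    ]},
    {"title": "A friend's journal", "entries": [
        {"title": "New photos", "time": 3},
        {"title": "Moved house", "time": 1},
    ]},
]

for e in direct_aggregate(feeds):
    print(e["source"], "-", e["title"])
```

This is exactly the right behavior for low-volume, high-relevance sources like the weather or a friend’s journal, where you want everything.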
On the other hand, directly aggregating something like sports scores is hard to do without being overwhelmed. For starters, you can use services like Feedster and PubSub to bring down news stories for specific topics you define. This would make it easy to bring down just the Major League Baseball scores, or to grab news specific to the Cosworth racing program. Similarly, you can use specific feeds from news sources that cover the topics you are looking for (such as the Reuters “Oddly Enough” feed). This sort of aggregation I term the filtered aggregation layer. It merely provides a mechanical pass through the content that is out there. Nothing more.
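The filtered layer can be sketched the same way. Real services like Feedster and PubSub have far more elaborate query languages; this hypothetical version reduces a query to a plain keyword list, which is enough to show the mechanical nature of the pass:

```python
# A toy filtered-aggregation layer: the same mechanical pass over
# entries, but only those matching a topic query get through.
# Queries here are just keyword lists; real services are richer.

def filtered_aggregate(entries, keywords):
    """Keep entries whose title mentions any of the topic keywords."""
    keywords = [k.lower() for k in keywords]
    return [e for e in entries
            if any(k in e["title"].lower() for k in keywords)]

entries = [
    {"title": "Red Sox win 5-3"},
    {"title": "Cosworth unveils new racing engine"},
    {"title": "Local election results"},
]

hits = filtered_aggregate(entries, ["cosworth", "red sox"])
for e in hits:
    print(e["title"])
```

Note that the filter is purely lexical: it knows nothing about why a story matters, only that certain words appear in it, which is exactly the weakness the next layer addresses.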
But the filtered aggregation layer falls short, mainly because it’s nearly impossible to actually enumerate all of the stories you could possibly be interested in. It’s easy to make queries too wide or too narrow, but getting them just right is, I propose, impossible for all but the most trivial examples. Enter the human aggregation layer. If you are aggregating the blogs of people who share your interests, and of people who interest you, you will receive the content you ultimately want to see, organically aggregated by those blogs. Once you reach the appropriate threshold of organic aggregation, you never need to worry about direct aggregation of the news again — an overwhelming task.
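The human layer is, by definition, hard to mechanize — the filtering happens in other people’s heads. But its effect can be sketched: treat each person you read as a filter, and let a story surface once enough of them have linked to it. The threshold and data shapes below are my invention, purely to illustrate the idea:

```python
# A toy human-aggregation layer: stories surface when enough of the
# people you read have independently linked to them. The people and
# threshold are hypothetical.
from collections import Counter

def organically_aggregated(people_feeds, threshold=2):
    """Surface stories linked by at least `threshold` people you read."""
    links = Counter()
    for person in people_feeds:
        for url in set(person["linked"]):  # each person counts once
            links[url] += 1
    return [url for url, n in links.items() if n >= threshold]

people = [
    {"name": "alice", "linked": ["story-a", "story-b"]},
    {"name": "bob",   "linked": ["story-a"]},
    {"name": "carol", "linked": ["story-c", "story-a"]},
]

surfaced = organically_aggregated(people)  # only story-a crosses the threshold
```

The interesting property is that no query was ever written: the “query” is the set of people you chose to read, which adapts as their interests (and yours) drift.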
In any event, this is all just an attempt to formalize something that is obvious to anybody who uses an aggregator. Further, I suppose it formalizes how I use an aggregator, which isn’t to say it’s the appropriate strategy or method for anybody else. Independent of my framework, I think the whole topic is interesting, as there is a lot of innovation going on. The way we use the web is rapidly evolving, and it seems to be returning to the role of a content transport rather than a presentation medium, which is really exciting.