Steve Gillmor's article about web aggregators:
To begin with, we need to harness the information we already possess about who and what we read. Rather than relying on content creators to signal already-consumed material, let's let the RSS aggregator (offline or online) filter out the links to already-consumed posts, but not the supporting commentary. Instrumenting the browser to record what is read, in what order, and for how long is trivial, says Adam Bosworth, in the context of his Alchemy caching architecture.

Next, let's incent that cache, mirrored on both server and client, to save posts that appear of interest or import not just to me but to my peers on the network, as represented by the RSS feeds that I and they subscribe to. If Jon Udell, Dave Winer, Doc Searls and 70% of their subscribers find the RSS BitTorrent thread compelling, then please send a message to my cache engine not to throw that post away, no matter whether I have ever heard of the poster, the horse they rode in on, or the idea he or she is promoting.
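A minimal sketch of that peer-driven retention policy, assuming the cache knows which of my network's subscribers have read each post (all class and field names here are illustrative, not from Gillmor or Bosworth's Alchemy):

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    url: str
    read_by: set = field(default_factory=set)  # subscribers who read this post

class AttentionCache:
    """Pin posts that enough of my peer network found compelling."""

    def __init__(self, peers, keep_threshold=0.7):
        self.peers = set(peers)              # the feeds I filter with and for
        self.keep_threshold = keep_threshold # e.g. Gillmor's "70% of subscribers"
        self.posts = {}

    def add(self, post):
        self.posts[post.url] = post

    def evict_candidates(self):
        """Split cached posts into keep/drop by peer attention share."""
        keep, drop = [], []
        for post in self.posts.values():
            share = len(post.read_by & self.peers) / max(len(self.peers), 1)
            (keep if share >= self.keep_threshold else drop).append(post.url)
        return keep, drop

# Example: three of four peers read the RSS BitTorrent thread, so it stays
# cached even though I have never heard of the poster.
cache = AttentionCache({"jon", "dave", "doc", "scoble"}, keep_threshold=0.7)
cache.add(Post("http://example.com/rss-bittorrent",
               read_by={"jon", "dave", "doc"}))
keep, drop = cache.evict_candidates()
```

The point is that eviction is driven by the network's attention, not mine alone: a post I have never opened survives if my chosen peers cared about it.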
Next, compare all the posts and posters and produce a weighted priority list that takes into account variables such as author, subject, updates, Technorati cosmos tracking, the amount of time I have before the next meeting on my calendar, and so on, producing a post rank based not just on my attention but on the attention dynamics of those I choose to do my filtering with and for me.
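That post-rank step might look something like the following sketch. The weights, field names, and the time-budget rule are all assumptions for illustration; Gillmor names the signals (author, subject, cosmos, calendar) but not a formula:

```python
def post_rank(post, weights, time_budget_minutes):
    """Blend my attention signals with my network's into one score.

    All inputs are assumed normalized to 0..1; the signal names are
    hypothetical stand-ins for Gillmor's variables.
    """
    score = (
        weights["author"]  * post["author_trust"]     # how much I read this author
        + weights["subject"] * post["subject_match"]  # topic overlap with my feeds
        + weights["cosmos"]  * post["inbound_links"]  # Technorati-cosmos-style link signal
        + weights["peers"]   * post["peer_attention"] # share of my network reading it
    )
    # With less time before the next meeting, demote posts too long to finish.
    if post["read_minutes"] > time_budget_minutes:
        score *= 0.5
    return score

def prioritize(posts, weights, time_budget_minutes):
    """Return posts ordered by rank, highest first."""
    return sorted(posts,
                  key=lambda p: post_rank(p, weights, time_budget_minutes),
                  reverse=True)

# Example: a post my whole network is reading outranks one nobody cares about.
posts = [
    {"url": "b", "author_trust": 0.1, "subject_match": 0.2,
     "inbound_links": 0.1, "peer_attention": 0.1, "read_minutes": 5},
    {"url": "a", "author_trust": 0.9, "subject_match": 0.8,
     "inbound_links": 0.5, "peer_attention": 0.9, "read_minutes": 5},
]
weights = {"author": 1.0, "subject": 1.0, "cosmos": 1.0, "peers": 1.0}
ranked = prioritize(posts, weights, time_budget_minutes=10)
```

The design choice worth noting is that the peer-attention term makes this a collaborative rank, not a personal one, which is exactly the shift Gillmor is arguing for.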
That should be incorporated into the roadmap for rss4you.com, the aggregator roby and I built!