The limits of filtering feeds to gain more relevance

23.10.2008 | Christian Kreutz

For the past months I have been trying to make the most of information, so I have been experimenting with feeds. No doubt the information power one gains through feeds is great, but the time needed to digest all that information is too long. Unfortunately, there are only a few well-working filter mechanisms for getting to the right information. As Clay Shirky puts it: "It's Not Information Overload. It's Filter Failure."

**ICT4Dfeed experiment** I can best describe my experiences with my small feed experiment around ICT4D themes (ICT4Dfeed). I experimented with Yahoo Pipes, AideRSS, FeedHub and many other tools, but I could not reduce the 30 daily resources to a smaller set of top resources. The only approach that worked was my own pre-selection, which is published through a Twitter ICT4D account. By the way, this has attracted quite a bit more attention (80 followers), whereas the ICT4D feed has around 25 subscribers. With Twitter, there is also the social network factor. But let me describe my lessons learned...

**Potentials** The ICT4D feed draws on 60 resources such as search engines, social bookmarking sites, selected influential blogs, community websites and organizations. In theory, the reader has to invest around 5 minutes to simply scan the feed, or half an hour daily to read the most important articles. In the first case, one is up to date on what is happening; in the second case, one can get to know the latest developments in detail. The 60 feeds capture the majority of the news in this particular sector. So, if you are willing to invest the time, you can compete with any organization. The old days of information imbalance, when only subscribers to paid services had an advantage, are over.

**Challenges** As Chris Brogan nicely puts it in this post, all the interesting comment filters are still too weak. The only option is to scan all these sources and find the golden nuggets among them. **Automatic filtering is not working** efficiently yet. I have tried filtering by certain keywords but have not been successful at getting better results, although pulling certain tags (for example, in del.icio.us) can be key for a good feed. I also tried services such as AideRSS, which rank feeds by the number of comments or links (via Technorati and Delicious). This works fine in a well-connected social media network, for instance in the US, but not for niche topics with valuable content and few links.
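To give an idea of the kind of keyword filtering I mean, here is a minimal sketch: it merges a handful of feeds and keeps only the entries that match a list of terms. The feed URLs and keywords are placeholders, and tools like Yahoo Pipes do this without any code, so treat it as an illustration of the approach rather than my actual setup.

```python
# Rough sketch of keyword-based feed filtering (placeholder URLs and terms).
import feedparser

FEED_URLS = [
    "http://example.org/ict4d-blog/feed",       # placeholder sources
    "http://example.org/development-news/rss",
]

KEYWORDS = ["ict4d", "mobile", "open source"]   # terms to match


def matching_entries(urls, keywords):
    """Yield entries whose title or summary contains one of the keywords."""
    for url in urls:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
            if any(keyword in text for keyword in keywords):
                yield entry


if __name__ == "__main__":
    for entry in matching_entries(FEED_URLS, KEYWORDS):
        print(entry.get("title", "(no title)"), "-", entry.get("link", ""))
```

The weakness shows up quickly: a plain keyword match either lets too much through or drops the valuable niche pieces that happen to use different wording, which is exactly why this did not give me better results.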

**Conclusion** There is real value in subscribing to a lot of resources on the Internet and having a well-researched, filtered selection of feeds. It surely takes time to find the best feeds, but the results can bring you a lot of value. I am building up such a feed for a magazine publisher at the moment, and it is surprising to see how little even the old media knows about the potential of feeds and RSS technology. A draft selection showed me how easily one can monitor the web for interesting topics. But the real potential, to me, lies in collective filtering. Therefore, as a further experiment I have decided to test Buzzmonitor to filter resources collectively. Drop me a line if you would like to join me in that experiment.

If people with the same interests come together, they can share the burden of filtering, as the nptech tagging experiment in Delicious proves: by reading the tag feed you already know about the latest non-profit technology developments. One problem remains, though: who summarizes this information? Today this is done mainly by "hub" bloggers digesting all the information. In this regard, FriendFeed is not helping, because it multiplies feeds and does nothing about filtering.
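To make the collective-filtering idea a bit more concrete, here is a small sketch (not how Buzzmonitor or the nptech experiment actually works): it reads the bookmark feeds of a few hypothetical contributors and surfaces the links that more than one of them has saved, so the group shares the filtering burden. The contributor feed URLs are placeholders.

```python
# Sketch of collective filtering: surface links bookmarked by several people.
# The contributor feed URLs below are placeholders, not real accounts.
from collections import Counter

import feedparser

CONTRIBUTOR_FEEDS = [
    "http://example.org/bookmarks/alice/nptech.rss",
    "http://example.org/bookmarks/bob/nptech.rss",
    "http://example.org/bookmarks/carol/nptech.rss",
]


def popular_links(feed_urls, min_votes=2):
    """Return (title, link, votes) for links that appear in at least min_votes feeds."""
    votes = Counter()
    titles = {}
    for url in feed_urls:
        feed = feedparser.parse(url)
        seen_in_this_feed = set()
        for entry in feed.entries:
            link = entry.get("link")
            if link and link not in seen_in_this_feed:
                seen_in_this_feed.add(link)   # count each contributor only once per link
                votes[link] += 1
                titles.setdefault(link, entry.get("title", "(no title)"))
    return [(titles[link], link, count)
            for link, count in votes.most_common() if count >= min_votes]


if __name__ == "__main__":
    for title, link, count in popular_links(CONTRIBUTOR_FEEDS):
        print(f"{count} votes: {title} ({link})")
```

The point of the sketch is simply that the "votes" of several people with the same interest already act as a filter; what it does not solve is the summarizing work that the hub bloggers currently do by hand.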