Carrying on with the same theme as yesterday's post – the fact that content is becoming disaggregated from a web site's home page – I read an excellent blog post today: What if Quality Journalism isn't? In it, the author looks at the seemingly divergent claims from the New York Times, which claims:
- They are "winning" at Journalism
- Readership is falling, on both web and mobile platforms
- Therefore, they need to pursue strategies to grow their audience
The author asks: "If its product is 'the world's best journalism', why does it have a problem growing its audience?" You can't be the world's best and fail at the same time. Indeed. The author then goes into a deeper analysis.
I like the analogy of the supermarket of intent (Amazon) versus the supermarket of interest (social) versus the niche. The central issue is how to curate articles of interest to a specific subscriber without filling their delivery with content that is superfluous (to that reader). This is where newspapers (in the author's case) typically fall down: 70% or more of their content is wasted on any one specific reader.
One comment under the article suggests one approach: there is already an open source aggregation model for the municipal bond market on Twitter via #muniland… journos from 20+ pubs, think tanks, govts, law firms and market commentators hash their stories and all share.
Deep linking to useful, pertinent and interesting content is probably a big potential area if alternative approaches can crack it. Until then, I'm having to rely on RSS feeds of known authors I respect, on common watering holes, or on the occasional flash of brilliance that crosses my Twitter stream at times I'm watching it.
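That RSS filtering can be done in a few lines of standard-library Python. A minimal sketch, assuming each feed item carries an `<author>` element – the feed content, author addresses and URLs below are all made up for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical list of authors whose posts I actually want to see
TRUSTED_AUTHORS = {"jane@example.com", "joe@example.com"}

# A tiny hand-rolled RSS 2.0 document standing in for a real fetched feed
SAMPLE_RSS = """<rss version="2.0"><channel>
  <title>Example feed</title>
  <item><title>Story A</title><author>jane@example.com</author>
    <link>http://example.com/a</link></item>
  <item><title>Story B</title><author>someone@else.com</author>
    <link>http://example.com/b</link></item>
</channel></rss>"""

def trusted_items(rss_text, trusted):
    """Return (title, link) pairs for items whose author is in `trusted`."""
    root = ET.fromstring(rss_text)
    results = []
    for item in root.iter("item"):
        author = item.findtext("author", default="")
        if author in trusted:
            results.append((item.findtext("title"), item.findtext("link")))
    return results

print(trusted_items(SAMPLE_RSS, TRUSTED_AUTHORS))
# → [('Story A', 'http://example.com/a')]
```

In practice you'd fetch the feeds over HTTP on a schedule, but the filtering step is really that small.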
I just need to update Aaron Swartz's code to spot water-cooler conversations on Twitter among specific people or sources I respect. That would probably do most of the legwork to enlighten me more productively, without subjecting myself to pages of search engine discovery.