Daniel Harrison's Personal Blog


To Free Or Not To Free March 5, 2008

Filed under: development — danielharrison @ 12:52 pm

There’s a bit of talk around the free meme at the moment. It’s something I have an interest in academically and with some apps I’d like to eventually launch, so I’ll add my thoughts. At this stage I’m busy just learning, so I plan to revisit in six months and see if I still think the same.

First, a bit of background. It’s quite an old meme, and there’s been a good deal of commentary, but here are the articles that I’ve liked.

Economic theory has long understood that information goods have very different characteristics from physical goods. There have been very limited chances to test and see this happening in the real world, but with the relatively recent arrival of ubiquitous cheap broadband across most of the developed world and the advent of ‘free’ web applications, we’re starting to have to consider ‘free’ as a business model. There’s a future shock where the technology starts to catch up to the theory, and the ripples are starting to propagate. I suspect there are some interesting economics PhDs and master’s theses going on right now :). I think the more disruptive technology is nano/on-demand manufacturing, which is just getting started; however, I think the lessons of ‘free’ around software will help shape the next century and its economy in dealing with non-scarce and informational goods. If the growth of manufacturing defined the last century, I really see this century being defined by information.

So, some general traits of information that set it apart from physical goods:

  • The more it’s used, the more valuable it becomes (typically, although there are some interesting implications around signal-to-noise ratios); a rough sketch of this and the next point follows the list.
  • It’s infinitely reproducible at trivial cost and so behaves differently from a physical good: the more it’s used, the less scarce it tends to become.
  • There’s less exclusive access: diminishing scarcity means more people have access to it and learn about it (watch that noise level rise, Reddit?).
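
A back-of-the-envelope way to see the first two traits together (a sketch of my own, using common rules of thumb rather than anything from the commentary above): the value of an information good or network is often approximated as growing roughly quadratically in its users, while the cost of serving each extra copy stays roughly constant and tiny,

  V(n) \approx k\,n^2, \qquad C(n) \approx c\,n \quad \text{with } c \approx 0,

so value per unit of distribution cost, V(n)/C(n) \approx (k/c)\,n, keeps climbing as the user base grows. No physical good behaves like that.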

There are some interesting consumer-theory changes to take note of under free. Under traditional models, relative price changes mean that consumers substitute and adopt different goods until they’re equally happy. What happens under ‘free’? Instead of substitution and good selection occurring on the basis of money, it shifts to time. People substitute attention, the one thing that is now limited. Scarcity doesn’t disappear; it just moves to a different factor. This is why I find the developing semantic web meme a harbinger of disrupted markets, at least in the attention and network space. Technologies that can save the consumer time and direct attention are the new old thing. Google is a case in point: relevant, directed advertisements around strong search are all about minimising the time it takes to find things. The one thing consumers cannot extend is the amount of time available to them, the steady metronome. In the long run we’re all dead!
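
To make the substitution argument concrete (my own sketch, not taken from any of the articles above): the textbook consumer problem is

  \max U(x_1, \dots, x_n) \quad \text{s.t.} \quad \sum_i p_i x_i \le M,

and when the prices p_i all fall to zero the money constraint stops binding, leaving

  \max U(x_1, \dots, x_n) \quad \text{s.t.} \quad \sum_i t_i x_i \le T,

where t_i is the attention each good demands and T is the fixed number of hours in the day. Scarcity hasn’t disappeared; the binding constraint has just moved from money M to time T.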

Until now, free services have existed only in relative isolation, and physical constraints like travel and shipping have prevented a real explosion in adoption and growth. Libraries are the best example: free to borrow, but interchange is still in most cases based on physical items. Large-scale digitisation is scaring the bejesus out of libraries and publishers everywhere, because it’s hard to understand how ‘free’ affects them. Until now academic theory only really had gedankenexperiments, but we’re starting to reach a point where wide-scale adoption and usage of free services are having significant effects on the economy and giving us testable models. It’s still early days, so like any disruptive technology, expect some bubbles and crashes!

Information is cheap to produce, but its value can increase rapidly with use via network effects. So, in combination with the now relative cheapness of launching a startup and the network effects of the web, you can rapidly increase the value of the information a startup trades in. Previously, under the first boom, the cost of launching was tied up in expensive infrastructure and the economy of scale wasn’t quite there. We’re starting to hit a relative glut of cheap processing time, which means the sunk cost for pure web startups has plummeted; suck it and see wasn’t a widely available option back then. There’s also the opportunity cost coming into play: bootstrapping a startup versus relatively easy money probably didn’t encourage it either.

I think software business models are undergoing a disruptive shift. Under enterprise or wide-scale software rollouts it was a ‘winner wins most’ type of market. Natural monopolies were the norm, which meant preventing access and not sharing had a strong economic benefit for the company that was currently on top. I suspect we’re starting to see a move away from this. Companies can now gain a natural monopoly by leveraging and taking account of the change to an informational good (coopetition, maybe?). I suspect Microsoft’s move to be a more open company, and to try to change its culture with its Yahoo acquisition, isn’t purely about competing with a new batch of upstarts, but a dawning realisation that the next growth area lies in leveraging and existing in a market defined by the informational good. When the type of good changes, you’re forced to change your model to reflect the new reality. There’s only so long you can use a business model to drive itself; that only really works if you’re a monopoly. I said before that information becomes more valuable the more it is used, so unlocking all the proprietary documents and information held in corporate repositories becomes the next growth enabler for their current markets as well. This is why I like Xobni so much. Outlook inboxes across the world are full of information with so much untapped value that by unlocking even some of it, the value they can provide is obvious, which is how they get Bill Gates demoing their product.

Making it free changes who benefits; it’s a shakeup for the market, and the market is still figuring out how to even understand the new world order. Previously, the people and organisations that could extract revenue from restricting or mediating access were the winners: ISPs, big media, installable-software companies. These kinds of models are much easier to quantify; toll-based models are easy to understand and work well for scarcity-driven resources. ‘Free’ shifts this to content and service authors: ad networks, publishing-tool providers, media providers. Serendipitous discovery and alignment of interests become a growth driver for the market itself and for the revenues of the companies that can capitalise on it, and quantifying value becomes an exercise in modelling.

The great trouble with the model is: how do you quantify the value of attention? Is a service around general attention more valuable than a niche one? Is click-per-action (immediate) more valuable than intention (deferred) and click-per-ad? Is Microsoft onto something with their intention-based advertising? How do you possibly value this under traditional models? How much value do you put on the API (OpenSocial, for example) that allows your competitors to leverage your talent and information? By spending money to grow yourself, you grow the market and your competitors. I think boards and traditional company management will struggle with this for quite some time, and I don’t see any easy answers. It’s a bit of gut feel, which is why the risk takers at the moment are the ones who can feel the value rather than quantify it. Typical of paradigm changes, but definitely biased towards those who can let go of, ignore or not care about traditional corporate valuation. I think it will inevitably still be winner wins most, but I do think there’ll be a bigger market for niche players, maybe more an oligopoly than natural monopolies. Maybe this will change over time and is just a symptom of an immature market. Maybe OpenSocial and integration APIs would never have even been a consideration if the market were not still growing. That’s not to say the market is likely to start to plateau any time soon; my guess is it’s just scrambled up the other side of the chasm in the adoption curve. As I see it at the moment, the startups most likely to succeed are those that can introduce or develop new approaches to extracting value from information, new and existing. This was always the case, but it’s now much cheaper to do with a quicker payoff. It would be interesting to try and quantify this.
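
As a concrete, entirely made-up illustration of how you might start comparing click-based and action-based advertising, look at expected revenue per impression under each (the rates and prices below are invented purely for the sake of the arithmetic):

  \mathbb{E}[\text{revenue/impression}]_{\text{per click}} = \text{CTR} \times \text{price per click} = 0.02 \times \$0.50 = \$0.01

  \mathbb{E}[\text{revenue/impression}]_{\text{per action}} = \text{CTR} \times \text{conversion rate} \times \text{payout per action} = 0.02 \times 0.05 \times \$10 = \$0.01

With these invented rates the two come out identical per impression; the real difference is who carries the conversion risk and how deferred the payoff is, which is exactly the part traditional valuation models struggle with.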

Not every company and application benefits from free access to information, though. The companies that can convert attention into quantifiable measures (ad networks, analysts, support) will benefit most. Where it’s not possible to quantify, or where free is too disruptive to the service, I expect these applications to be offered as loss-leader ancillary services or left to niche providers. E.g. if you use Gmail as your webmail client, what are the chances you primarily use Google search? Companies that feed on attention, can achieve economies of scope and can maximise people’s time gravitate naturally to free. Google, Yahoo, Red Hat and Automattic are all able to convert attention into dollars, using strategies such as corporate support, ad networks and ‘professional’ upgrades to generate revenue.

For attention, free is like gravity. Why? If something increases in value the more it’s used, then the cost to use it must approach zero to encourage people to build on and adopt it. Non-free is a barrier to entry that limits the growth and adoption of the service. If Wikipedia had cost money, or required formal qualifications and expertise (i.e. time), it would never have been as successful as it has been (compare it against Britannica). Wikipedia itself is the perfect ancillary service, i.e. driving attention to relevant and potentially very targeted paid services or goods.

Not all information-good producers and providers should make their content free, though. The more niche the content, the less the need to be free. If the value the information accrues as more people use it diminishes rapidly, then the benefit of being free diminishes as well. A niche has a much smaller tolerance for noise than something meant for general consumption. Niches also tend to have non-transferable information: specific content and solutions don’t lend themselves to application in another niche. The way to ensure a high signal-to-noise ratio is to boost the strength of the signal or reduce the noise. Where something needs a strong signal, which can be achieved by paid researchers, analysts or specific solutions, the large and non-transferable costs are more efficiently recouped by a paid service. Non-free information services reduce the noise by introducing something that behaves like an ordinary good and responds to scarcity: money. Perception of value is a very important thing; people are less likely to contribute noise when doing so would devalue their own investment. Pricing is a very important signal to a niche: priced too low, noise rises and devalues the channel for everyone; priced too high, the volume of the signal itself is reduced. That’s not to say a general solution can’t be applied to a niche. Ning is a case in point: niche social networks built on a general solution. I would say, though, that the majority of Ning networks are still pretty broad. For really specific niches, where the value that can be accrued is highly vulnerable to noise, a paid service may be a better option.

I don’t think you need to defend free. In some cases it’s appropriate; in others it’s not. It’s counter-intuitive for most people, and that’s not unexpected, as information goods behave differently from any other good people have experience with. A lack of scarcity is hard to comprehend when nothing physical behaves this way.

Hopefully this hasn’t bounced around too much, but I’m trying to bring together a few different threads in my head and understand whether there’s a market for an idea I have.