Daniel Harrison's Personal Blog

Personal blog for Daniel Harrison

Web Reading and Multitouch, Speed Matters July 15, 2010

Filed under: development — danielharrison @ 11:24 pm

I’ve noticed lately I’ve spent a lot of time reading my RSS feeds on my iPad rather than using my computer and browser. I think it’s all down to one thing: multitouch allows reading/action user experiences to be much faster than using the mouse. I have about 600 subscriptions that on average give me between 500 and 1000 articles a day. My usual routine was to get up and, as I worked through my email and first tea of the day, go through my feeds each morning. My read/action loop was to scroll through a category or news feed; as I saw articles I’d open them in a new tab; when I finished the category I’d read the content in each tab, bookmarking/adding to delicious, then closing the tab until I navigated back to the reader tab. With this pattern of usage it’d typically take about half a day (7-12) to scan my entire feed collection for interesting content (about three teas). It wasn’t really possible to sit down and do it in a single period, as it would take up too much of a block of time in my morning, which meant I’d pick 5-15 minute intervals. My reader of choice is Google Reader, and when using the mouse the typical navigation pattern means moving around quite a lot: navigating between links in the article, scrolling through articles and selecting folders. Each of these actions takes time, tens to hundreds of milliseconds, but that time cumulatively adds up. This is a Fitts’s law world where the mouse moves round the screen selecting targets, each action consuming time which adds to the time to process and digest the content. It also meant navigation took a higher cognitive load, which left less capacity to focus on the actual content.

Why I think the iPad is faster for reading, regardless of the reader, is that suddenly having multiple points of action with less requirement for movement makes it possible to execute the read/action loop much faster. A simple flick anywhere on the right will scroll the content, and the left or top bars hold the action controls, which means it’s now possible to navigate through my entire feed collection in about half an hour, if not faster. The highest-cost movement for an action under Fitts’s law when using a mouse is having to move the mouse diagonally across the screen to hit a target in the opposite corner. This is the maximum distance the mouse has to travel, which means targets are harder to hit, adding to the cognitive load of using the app and performing actions. Under multitouch there’s still a cost to identifying targets and actioning them, but there’s magic in that control can be transferred to the opposite hand, instantly more than halving the time to action an intent on the opposite side of the screen when using two hands. So on average it takes less time to navigate and perform actions, which speeds up the reading/scanning loop, which means it’s taking me significantly less time to process an increased amount of information. Correspondingly, I’ve found I’m even less picky about adding new feeds to my reader, which is increasing my knowledge sources and my ability to raise the signal-to-noise ratio and identify the truly valuable information to improve my work.

So here are my key takeaways.

  • Giving users the ability to scan, process and action content faster always wins.
  • Fitts’s law, which suggests good-sized buttons for targets, is still relevant; however, multitouch has significant implications for UX design.
  • iPad-sized devices with multitouch are great for reading and processing information streams.
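To make the arithmetic behind this concrete, here’s a small sketch of Fitts’s law in its Shannon formulation, T = a + b·log2(D/W + 1), comparing a full diagonal mouse move against a short reach with the oppositead hand. The constants a and b, the screen dimensions and the touch travel distance are all illustrative assumptions, not measurements.

```python
import math

def fitts_time(distance_px, width_px, a=0.2, b=0.1):
    """Shannon formulation of Fitts's law: T = a + b * log2(D/W + 1).

    a (seconds) and b (seconds per bit) are device/user constants;
    the defaults here are illustrative only, not measured values."""
    return a + b * math.log2(distance_px / width_px + 1)

# Mouse: travel diagonally across a 1440x900 screen to a 30 px target.
diagonal = math.hypot(1440, 900)
mouse_t = fitts_time(diagonal, 30)

# Touch with the opposite hand: the effective travel distance is far
# shorter, because that hand is already resting near its side of the
# screen (a quarter of the diagonal is an arbitrary stand-in).
touch_t = fitts_time(diagonal / 4, 30)

print(f"mouse: {mouse_t:.2f}s per action")
print(f"touch: {touch_t:.2f}s per action")
# Over ~1000 articles with several actions each, even tenths of a
# second saved per action compound into minutes per session.
```

The absolute numbers don’t matter; what the logarithm shows is that halving or quartering the travel distance reliably shaves time off every single action in the loop.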

Balsamiq Mockups Blog August 30, 2008

Filed under: development — danielharrison @ 5:16 pm

I came across the Balsamiq Mockups product a while ago from a link off the Atlassian software blogs. I think it’s a really well thought out and executed product. My personal view is that when designing a product you can end up focusing too much on a specific design rather than the underlying concept, and their mockups product is a great tool to avoid this. It has a crude-ish hand-drawn view of mockups and doesn’t pretend to offer pixel-perfect layout and representation. It’s the perfect mockups tool and encourages focus on the UX and concepts rather than any specific rendering. I think in product development you can get a chasm between the people who have to implement the system and a UI designed in a graphics tool. At least I’ve found (in more than one organisation) this can devolve into focusing on specific looks rather than critically examining the user experience of a system. You can use wireframes, but then this often goes the other way, with not enough intent conveyed. I think the mockups product is the perfect balance: not too specific, but built around designing, exploring and conveying intent and the user experience.

I was also dubious and highly skeptical of AIR and Flash apps, but I’ve been really impressed with the leveraging of Flash into both a standalone desktop instance and integration into various enterprise wikis with a product that just nails it. We use MediaWiki at my work and I’d love to see some support 😉

Anyway the blog is at http://www.balsamiq.com/blog/ and it’s definitely worth following both for software development and micro isv wisdom.


Why I Like Tuesdays as a Weekly Release Day August 5, 2008

Filed under: development — danielharrison @ 11:07 pm

One of the things that keeps a project iterating once it reaches a certain point is regular releases.

I like weekly releases, long enough to allow for a decent chunk of work, but short enough to get early feedback and not ‘go dark’.

As part of this I like trying to get the build out every Tuesday. Why?

  • Elimination-wise, Friday is not so good because people are winding down after a week; having to stay late to get the build out on a Friday night is the last thing people want to be doing. If things go wrong, the only options are to have people stressing about the build all weekend or working the weekend to get it done.
  • It allows Monday to be a clear-the-decks day, with everyone fresh and refreshed to work on making sure everything’s solid and stable with a view to other people using it.
  • It allows the majority of the work week to be dedicated to getting the next set of functionality done.
  • Monday is usually for planning and figuring out what the next iteration needs/will be, so it’s better not to be tied up in assembling and testing.

20/20 Vision April 18, 2008

Filed under: development — danielharrison @ 4:53 pm

This weekend downunder there’s a conference called Australia 2020. We have a fresh new government and a group of luminaries will meet in Canberra to discuss what Australia’s future should be.

Will this actually result in anything? The cynic in me says no. With the range of ideas that come out, it becomes an opportunity to cherry-pick those that align with the party of the day’s policies and then flag that it was done with full community consultation. But maybe I’m wrong. It just strikes me that high-level visions are meaningless without a vision for the specifics and how to get there. Maybe the conference needs a followup on how this should be achieved and what it looks like in specifics. I’ll be interested to see the outcomes and whether they get down to how we’re going to achieve that vision.

I think politicians have a hard task nowadays; vision means putting a concrete view of how things should be up for discussion, which provides an immediate opening to kick in the media machine on how it’s not practical, it’s not shared by the community, it’s too expensive, and so on. My feeling generally is that the contrarian view is often over-reported, as groups of people vigorously agreeing doesn’t sell well. For parties, having a specific, strong vision of how something should be done and achieved becomes, in a lot of cases, a political liability.

I think this tends to self-select large visions that are non-controversial, e.g. World Peace: meaningless platitudes. Specific visions mean the vision can be cut down; it’s possible to discuss it in specifics. The previous Howard government had a vision of workplace reform, but the specifics are what eventually brought them down. The high-level vision was sold, but the vision of how it was to be achieved was not, so when it came and was delivered it shocked a lot of people, who felt it was unfair and that they’d been cheated.

Shipping products, I think, actually has a lot to learn from politics. A high-level vision doesn’t a shipping product make. It’s the first step, not the end result. Getting to the stage of a shipped product is ultimately all about the people. If the team doesn’t have a vision, and someone who can drive that vision, then I think the product is doomed to failure.

Specific vision is hard though. Leadership means not only the ability to have a vision but to get people to adopt the vision as their own and deliver on it. This is a lesson I’ve seen in action and been learning from my company’s CTO. (He put me onto the Silicon Valley Product Group articles, which have good writing on leadership and people.)

Vision is pretty much the thing that turns a small startup into a successful company, but it’s also something that can break it. When a company is small, a couple of people, the vision is strongly shared. It has to be: the reduction of income and the long hours mean that the vision is what drives the ability to make sacrifices. When the company starts to grow and acquire employees, they’re not required to adopt the vision; you can’t put it in the contract. Vision, I think, is like one of those communication-node problems: for each extra person the vision has to be communicated through, the less it is shared and adopted. I hypothesise it’s one of the reasons to keep a flat management structure in product-shipping companies. Companies that are successful can ultimately get a group of people to adopt a vision. That makes things happen. People are enthused and do what needs to be done to deliver on it. If the management or the founder can’t get people to adopt and indeed grow the vision, then I think it’s ultimately going to be an unsuccessful company.

I was previously a consultant, a senior J2EE consultant for a large multinational services company. I left after about 2 years. I was onsite, and the applications I was team lead for were interesting and had a vision which I helped deliver with some cool technology, but ultimately I felt that the company as a whole didn’t have a vision (apart from make money), or at least I wasn’t part of it. I think this is a common theme among some of the developers in the company I work for: refugees from services/consulting/contracting in search of a vision. You give up the consulting and contracting rates, but you gain a vision and the ability to shape and grow it.

At my work at the moment we’re trying a change to the product development process. Basically it’s a quick spec with some high-level scenarios framing the problem. When this becomes concrete, we’ve decided it’s worth doing and the resources are available, we get all the developers, testers, analysts and managers in the room and storyboard and brainstorm ideas, iterating between the solution, prototyping and more brainstorming. It’s all about building a shared vision and team. The people in the room must be the people doing the work; there’s no throwing a solution over the fence. The solution and innovation come about as the team develops the vision for solving the problem. Sounds obvious, but getting this to work smoothly can be very challenging. Smart, passionate people mean that disagreement is a natural and expected outcome. Harnessing that into a shared vision that can be focussed like a laser can be a tricky problem. The joke at the moment is: well, at least we don’t have a problem with passion.

The problem previously was that this was all too disjointed: people would come late to the party and didn’t share the same vision, or thought it should change. Because they didn’t own and share the original vision, there was a lot of rework and too much of an over-the-fence culture, which meant that while it was a team, there was too much of the developers, testers, documenters and analysts acting as subteams rather than a single focussed team solving a shared problem. Rework over what should be done was too high and meant friction over the vision.

It will be interesting to see how this goes and its effectiveness at building and shaping vision. The same can be said of the 2020 summit.


Don't Make Me Think – RSS Feeds

Filed under: development — danielharrison @ 3:22 pm

I have 270 feeds I follow, working my way through the full list about every 2-3 days.

In the spirit of the great book, there are a couple of things where too much thinking is needed.

  • A site having a feed that isn’t automagically detected by Firefox. This makes subscribing a game of hide and seek, trying to find the RSS icon, which can be anywhere.
  • Also, if there are separate RSS feeds for sections, include subscription links on each section for both the section feed and the full feed. I often find I have to click around to move up the section tree so I can find the full feed.
  • Non-full-content feeds. I understand some sites monetise content, but rather than opting for partial content, monetise the feed as well. RSS is all about giving me the ability to consume the web my way. I don’t mind ads, but if I have to click around and leave my feed reader then it breaks my flow.
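Feed autodiscovery is just a `<link rel="alternate">` element in the page head; browsers like Firefox look for it there, so a site only needs one line of markup to fix the hide-and-seek problem. As a rough illustration, here’s a stdlib-Python sketch of what that detection amounts to (the sample page markup is made up):

```python
from html.parser import HTMLParser

class FeedFinder(HTMLParser):
    """Collects the <link rel="alternate"> feed URLs that browsers
    use for feed autodiscovery."""
    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        if a.get("rel") == "alternate" and a.get("type") in self.FEED_TYPES:
            self.feeds.append((a.get("title", ""), a.get("href")))

# A discoverable feed needs nothing more than this in <head>:
page = """<html><head>
<link rel="alternate" type="application/rss+xml"
      title="Full feed" href="/feed.xml">
</head><body>...</body></html>"""

finder = FeedFinder()
finder.feed(page)
print(finder.feeds)  # [('Full feed', '/feed.xml')]
```

A section page could carry two such `link` elements, one for the section feed and one for the full feed, which would solve the click-around-the-section-tree complaint too.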

To Free Or Not To Free March 5, 2008

Filed under: development — danielharrison @ 12:52 pm

There’s a bit of talk around the free meme at the moment. It’s something I have an interest in academically and with some apps I’d like to eventually launch, so I’ll add my thoughts. At this stage I’m busy just learning, so I plan to revisit in 6 months and see if I still think the same.

First a bit of background. It’s quite an old meme, and there’s been a good deal of commentary, but here are the articles that I’ve liked.

Economic theory has long understood that information goods have very different characteristics from physical goods. There have been very limited chances to test and see this happening in the real world, but with the relatively recent arrival of ubiquitous cheap broadband across most of the developed world and the advent of ‘free’ web applications, we’re starting to have to consider ‘free’ as a business model. There’s a future shock where the technology starts to catch up to the theory and the ripples are starting to propagate. I suspect there are some interesting economics PhDs and masters going on right now :). I think the more disruptive technology is nano/on-demand manufacturing, which is just getting started; however, I think the lessons of ‘free’ around software will help shape the next century and its economy in dealing with non-scarce and informational goods. If the growth of manufacturing defined the last century, I really see this century being defined by information.

So some general traits of information that stand it apart from physical goods.

  • The more it’s used, the more valuable it becomes (typically, although there are some interesting implications around signal-to-noise ratios).
  • It’s infinitely reproducible for trivial cost; information goods behave differently, in that the more one is used the less scarce it tends to become.
  • There’s less exclusive access; diminishing scarcity means more people have access and learn about it (watch that noise level rise, reddit?).

There are some interesting consumer-theory changes to take note of under free. Under traditional models, relative price changes in goods mean that consumers substitute and adopt different goods until they’re equally happy. What happens under ‘free’? Instead of substitution and good selection occurring on the basis of money, it shifts to time. People substitute attention, the one thing that is now limited. Scarcity doesn’t disappear but shifts to a different factor. This is why I find the developing semantic web meme a harbinger of disrupted markets, at least in the attention and network space. Technologies that can save the consumer time and direct attention are the new old thing. Google is the case in point: relevant, directed advertisements around strong search are all about minimising the time to find things. The one thing consumers cannot extend is the amount of time available to them, the steady metronome. In the long run we’re all dead!

Until now, free services have only existed in relative isolation, and physical constraints like travel and shipping have prevented a real explosion in adoption and growth. Libraries are the best example: free to borrow, but interchange is still in most cases based on the physical items. Large-scale digitisation is scaring the bejesus out of libraries and publishers everywhere because it’s hard to understand how ‘free’ affects them. Until now academic theory only really had gedankenexperiments, but we’re starting to reach a point where wide-scale adoption and usage of free services are having significant effects on the economy, and testable models. It’s still early days, so like any disruptive technology, expect some bubbles and crashes!

Information is cheap to produce but can rapidly increase in use and value via network effects. So, in combination with the now relative cheapness of launching a startup and the network effects of the web, you can rapidly increase the value of the information a startup trades in. Previously, under the first boom, the cost of launching was tied up in expensive infrastructure and the economy of scale wasn’t quite there. We’re starting to hit a relative glut of cheap processing time, which means the sunk cost for pure web startups has plummeted. Suck it and see wasn’t a widely available option. There’s also the opportunity cost coming into play; bootstrapping a startup versus relatively easy money probably didn’t encourage it either.

I think software business models are undergoing a disruptive shift. Under enterprise or wide-scale software rollouts it was a ‘winner wins most’ type of market. Natural monopolies were the norm, which meant preventing access and not sharing had a strong economic benefit for the company that was currently on top. I suspect we’re starting to see a move away from this. Companies can now gain a natural monopoly by leveraging and taking account of the change to an informational good (coopetition maybe?). I suspect Microsoft’s move to be a more open company, and to change its culture with its Yahoo acquisition, isn’t purely about competing with a new batch of upstarts, but a dawning realisation that the next growth area lies in leveraging and existing in a market defined by the informational good. When the type of good changes, you’re forced to change your model to reflect the new reality. There’s only so long you can use a business model to drive itself; that only really works if you’re a monopoly. I said before that information becomes more valuable the more it is used, so unlocking all the proprietary documents and information held in corporate repositories becomes the next growth enabler for their current markets as well.
This is why I like Xobni so much. Outlook inboxes across the world are full of information with so much untapped value that by unlocking some of it, the value they can provide is obvious, and so they get Bill Gates demoing their product.

Making it free changes who benefits; it’s a shakeup for the market, and the market is still figuring out how to even understand the new world order. Previously, people and organisations that could extract revenue from restricting or mediating access were the winners: ISPs, big media, installable-software companies. It’s much easier to quantify those kinds of models; toll-based models are easy to understand and work well for scarcity-driven resources. ‘Free’ changes this to content and service authors: ad networks, publishing-tool providers, media providers. Serendipitous discovery and alignment of interests becomes a growth driver of the market itself and of the revenues of the companies that can capitalise on it, and quantifying value becomes an exercise in modelling.

The great trouble with the model is: how do you quantify the value of attention? Is a service around general attention more valuable than a niche one? Is click-per-action (immediate) more valuable than intention (deferred) and click-per-ad? Is Microsoft onto something with their intention-based advertising? How do you possibly value this under traditional models? How much value do you put on the API (OpenSocial, for example) that allows your competitors to leverage your talent and information? By spending money to grow yourself, you grow the market and your competitors. I think boards and traditional company management will struggle with this for quite some time, and I don’t see any easy answers. It’s a bit gut-feel, which is why the risk takers at the moment are the ones who can feel the value rather than quantify it. Typical of paradigm changes, but definitely biased towards those who can let go of, ignore, or not care about traditional corporate valuation. I think inevitably it will still be winner-wins-most, but I do think there’ll be a bigger market for niche players, maybe more an oligopoly than natural monopolies. Maybe this will change over time and is a symptom of an immature market. Maybe OpenSocial and integration APIs would never have even been a consideration if the market were not still growing. That’s not to say the market is likely to plateau any time soon; my guess is it’s just scrambled up the other side of the chasm in the adoption curve. As I see it at the moment, the startups most likely to succeed are those that can introduce or develop new approaches to extracting value from information, new and existing. This was always the case, but it’s now much cheaper to do with a quicker payoff. It would be interesting to try and quantify this.

Not every company and application benefits from free access to information, though. The companies that can convert attention into quantifiable measures (ad networks, analysts, support) will benefit most. Where it’s not possible to quantify, or too disruptive to the service, I expect these applications to be offered as loss-leader ancillary services or left to niche providers. E.g. if you use Gmail as your webmail client, what are the chances you use Google search primarily? Companies that feed on attention, can achieve economies of scope and can maximise time for people naturally gravitate to free. Google, Yahoo, RedHat, Automattic: all are able to convert attention to dollars using strategies such as corporate support, ad networks and ‘professional’ upgrades to generate revenue.

For attention, free is like gravity. Why? If something increases in value the more it’s used, then the cost to use it must approach zero to encourage people to build on and adopt it. Non-free is a barrier to entry that limits the growth and adoption of the service. If Wikipedia had cost money, or required formal qualifications and expertise (i.e. time), it would never have been as successful as it has (compare it against Britannica). Wikipedia itself is the perfect ancillary service, driving attention to relevant and potentially very targeted paid services or goods.

Not all information-good producers and providers should make their content free, though. The more niche the content, the less the need to be free. If the value the information accrues as more people use it diminishes rapidly, then the benefit of being free diminishes as well. A niche has a much smaller tolerance for noise than something that is for general consumption. Niches also tend to have non-transferable information; specific content and solutions don’t lend themselves to application in another niche. The way to ensure a high signal-to-noise ratio is to boost the strength of the signal or reduce the noise. Where something needs a strong signal, which can be achieved by paid researchers, analysts or specific solutions, the large and non-transferable costs are more efficiently recouped by having a paid service. Non-free information services reduce the noise by introducing something that responds like an ordinary good and responds to scarcity: money. Perception of value is a very important thing. People are less likely to contribute noise, as they would devalue their own investment by doing so. Pricing is a very important signal to a niche: too low and noise becomes too high, devaluing the channel for all; too high and it reduces the volume of the signal. That’s not to say a general solution can’t be applied to a niche. Ning is a case in point: niche social networks using a general solution. I would say, though, that the majority of Ning networks are still pretty broad. For really specific niches where the value that can be accrued is highly vulnerable to noise, a paid service may be a better option.

I don’t think you need to defend free. In some cases it’s appropriate; in others it’s not. It’s counterintuitive for most people, and that’s not unexpected, as information goods behave differently from any other good people have experience with. A lack of scarcity is hard to comprehend when nothing physical behaves this way.

Hopefully this hasn’t bounced around too much, but I’m trying to bring together a few different strands in my head and understand if there’s a market for an idea I have.


The CookBook Approach To Making Up For Missing Documentation February 26, 2008

Filed under: development — danielharrison @ 4:34 pm

The last bunch of people working on the code were a bunch of crack-smoking monkeys. It’s landed in your lap like a wet cat, and it’s your job to pull it all together, make it work and ship it yesterday. The entire documentation for the project is a patchy user manual, and the people responsible, who know what it’s supposed to do, have moved on after being told that maybe they’d be more suited to something like, well, knitting or demolition.

I’m going to detail a strategy that I’ve now used successfully a couple of times to help fill in the cracks. I figured it’s probably worth writing up so I remember it and can point it out anonymously; look, it’s on the internet, it must be authoritative. It won’t give you pristine documentation, but I’ve found it will at least give you a basis to build on for the next few releases. If you’ve got nothing and you can’t stop and write it all up (who can?), you’re not going to be able to fix it all in a single release. You can plan to fill it in, though, and see the light at the end of the tunnel. First of all, you’ve got a project wiki, right? Something you and the team can write informal documentation on? If you haven’t, go download and install one somewhere, and come back later. OK, so on to the story.

My grandma had a cookbook. It wasn’t a hardcover printed manual with a picture of an attractive (or not so attractive) chef gracing the cover. It was a behemoth. It had escaped the binder that contained it. It had devoured other cookbooks. When my grandmother particularly liked something, she liberated it from its uniformity in a published cookbook, ripped it out and crammed it in there. There were handwritten recipes in an unfamiliar hand that I’m sure she had brought from distant ancestors. It had food stains on it. Maybe she thought that if you couldn’t read it any more, at least the food stains would give an indication of the ingredients. An early form of scratch and sniff, maybe? Here’s the rub, though: it was disorganised and in a lot of cases incomplete, but she knew it back to front and in between. Based on her knowledge of the world, she could build on it and serve consistently high-quality meals. When someone at dinner said ‘I must get the recipe for this’, she would know exactly where the recipe was; she’d take it out, rewrite it filling in all the skipped steps, and then someone else could make something great.

Nice story, but how does this relate to code and documentation? I’m going to suggest you start building a cookbook. This is where the wiki comes in. A cookbook isn’t complete. It doesn’t contain the thousands of variations on macaroni and cheese, but it does usually have a single recipe which captures the intent.

So in this context, what’s a recipe? The way I’ve thought about it, a recipe in an application context usually covers a simple functional area, e.g. employee management. It will usually have a UI for creating, editing, deleting and updating. So it has a datasource, which in most cases is hooked up to a db and may encompass some tables and maybe some stored procedures. There are probably some data-integrity constraints or some interesting behaviour. It may be related to another recipe, e.g. organisation hierarchy. It may have some gotchas: deleting an employee only sets an additional field indicating they’re no longer active but doesn’t actually delete them. There’s no hard and fast rule, but you need to come up with something people can add to and build on so that you can start filling in what the system does. I usually create a really simple template.

  • Functional Area Description – what does it do, who uses it, when do they use it
  • UI – Basic sequence, create with this form, delete with this form, this validation sequence is important
  • Data – These are the tables that are involved, this uses this specific algorithm
  • Gotchas – General things that might trip someone up. The worse the code the more important this is.
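To keep the recipes uniform, the four headings above can be stamped out by a trivial script whenever someone starts a new functional area. This is just a sketch; the wiki-style markup here is illustrative, so adjust it to whatever syntax your wiki actually uses.

```python
# Stamps out a blank recipe page so every functional area starts
# from the same four headings.
TEMPLATE = """= {area} =

== Functional Area Description ==
What does it do? Who uses it? When do they use it?

== UI ==
Basic sequence: create with this form, delete with this form,
this validation sequence is important.

== Data ==
These are the tables involved; this uses this specific algorithm.

== Gotchas ==
General things that might trip someone up.
"""

def recipe_stub(area):
    """Return a blank wiki recipe page for the given functional area."""
    return TEMPLATE.format(area=area)

print(recipe_stub("Employee Management"))
```

The payoff isn’t the script itself but the consistency: when every recipe has the same skeleton, 50-80% coverage is easy to eyeball and the gaps are obvious.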

For a first release, 50-80% coverage is good. You start to build an understanding of the system. Future specifications actually have a basis, and documenters, testers and developers can start adding to and reading it. You get a bit more time on the next release because some of it’s already documented and you’re not starting from scratch. You start organising it into categories and breaking it down a bit more. You start adding things like schema diagrams, UML sequence diagrams, data-integrity documentation, glossaries, a table of contents. Eventually you end up with pretty good documentation. I’ve found that development actually gets faster: when someone asks a question or someone new joins, the answer is, it’s on the wiki, in the cookbook.

This is a strategy I’ve found helpful. It’s not perfect, but it’s often a good start, and it’s a good conversation for a team to have using familiar concepts.