Daniel Harrison's Personal Blog

Personal blog for daniel harrison

Things to watch out for in HTML5 IndexedDB as at 21 June 2011 June 21, 2011

Filed under: development,internet,javascript,web — danielharrison @ 6:59 am

I’m between contracts at the moment, so I’m taking the opportunity to play with some bleeding-edge technology.  With everyone seemingly jumping on the HTML5 bandwagon, even Microsoft with Windows 8, it seemed like a good opportunity to restart my side project playing with the latest web tech.

So there are a few things to note if you pick up IndexedDB.  It’s bleeding edge, so rough edges are to be expected, but here are my experiences over the last week.

IndexedDB is in WebKit (Chrome) and Firefox but not yet in Safari.  The database visualisation in the WebKit developer tools isn’t hooked up yet, so you can’t manage the database that way.  You also can’t delete a database programmatically yet in either Chrome or Firefox; if you’re writing unit tests this is going to be a bit of a pain ;).  Nor can you access IndexedDB from web workers yet; at this stage it’s attached to the window.  One of the things I’m playing with is a stemming and text-sorting index which was all running via web workers.  There’s an easy workaround: take the results from the web workers and, at a convenient time, merge and store them on the main thread instead of writing directly from the worker.  Still, it will be cool when this works.
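A minimal sketch of that workaround, assuming (hypothetically) that each worker posts back a partial index mapping stem to document ids — merge the partials on the main thread, then persist only if IndexedDB exists (the API shape here is the modern standard one, not the older setVersion-era API from 2011):

```javascript
// Hypothetical shape: each worker posts a partial index, stem -> [doc ids].
// Merge all partials into one object on the main thread.
function mergeIndexes(partials) {
  var merged = {};
  partials.forEach(function (partial) {
    Object.keys(partial).forEach(function (stem) {
      merged[stem] = (merged[stem] || []).concat(partial[stem]);
    });
  });
  return merged;
}

// Browser-only: persist the merged index at a convenient time.
// No-op outside the browser, where indexedDB is undefined.
function storeIndex(merged) {
  if (typeof indexedDB === 'undefined') return;
  var request = indexedDB.open('stemIndex', 1);
  request.onupgradeneeded = function (e) {
    e.target.result.createObjectStore('stems');
  };
  request.onsuccess = function (e) {
    var db = e.target.result;
    var store = db.transaction('stems', 'readwrite').objectStore('stems');
    Object.keys(merged).forEach(function (stem) {
      store.put(merged[stem], stem);
    });
  };
}
```

In a real app the workers would postMessage their partials and you’d batch the store call, but the merge-then-store split is the whole trick.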

The other thing I’ve noticed is that it feels very different from other data stores, even other key-value stores such as Cassandra.   It really is a JavaScript data store.   The feeling I get is that the asynchronous model is the preferred interaction method, which again feels different from other APIs.  I’m still getting the feel of it, but it feels right for client-side JavaScript.  In my opinion, if I had to choose between the SQLite model and this, I’d choose this as the better technology direction for browser-based structured client storage.  SQLite would just have recreated the SQL feeling of data stores, and I don’t think it would have felt quite right for JavaScript in the long term.

I’m sure these issues will be addressed pretty shortly.  I’m running Chrome alpha and dev channels and Firefox 5, and will post back when I notice a change.


Options and Tradeoffs for Rich Text Editing in HTML5ish Technologies March 11, 2011

Filed under: development,internet — danielharrison @ 9:22 pm

There are a number of options for adding rich text editing to your website, and all come with tradeoffs guided by the amount of control you need.

Content Editable

Content editable is the default solution for text editing on the web.  Originating from Microsoft’s pioneering work in the 4.0-era browsers, all browsers now support the basic API.  It’s the technology behind most rich editors: TinyMCE, YUI Editor, CKEditor.  The problem, though, is that the technology is quite old in internet time and the API doesn’t smell quite right any more.  It isn’t one that will feel familiar to developers used to JavaScript, jQuery and DOM manipulation.  It lives at a higher abstraction, via document.execCommand.  If you apply the bold command to a set of text it doesn’t return a selection, the new element or set of elements, and doesn’t really care about the DOM at that level.  If you do want to take a DOM-centric approach you’ll need to attach listeners for node operations and get a bit clever about understanding what changed.

Most frameworks mean you don’t really need to care, and abstract it away sufficiently that it’s easy to have a competent, performant solution ready in a couple of hours.  contentEditable does address some of the complexity that can arise in complex formatting, which you’d have to solve yourself if you took an ownership position.  For example, applying bold or converting to a list works on nested content and gets it right enough.  It doesn’t produce what would be considered the cleanest HTML, e.g. every empty paragraph is <p><br></p> (<div><br></div> in WebKit-based browsers).  It’s the good-enough solution, and if you’re happy to make it a desktop-browser experience and want a quick solution, it’s the easiest.  You also get things like spell checking for free (most browsers now support this by default).

One extension to contentEditable is to use the selection API.  This tool has facilities to surround content, insert elements at the start of a selection and otherwise manipulate HTML based on user input.
In some ways the selection API is easier to use, as it has a DOM-based view of the world, which makes it much easier to integrate with bleeding-edge technologies like HTML5 history.
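A minimal sketch of the two layers, browser-only where noted — the normalisation helper is hypothetical (my own name for it), but it shows the kind of cleanup the execCommand layer leaves you with:

```javascript
// Browser-only: apply bold via the contentEditable command layer.
// Note execCommand returns no selection or element; it just "does it".
function applyBold() {
  if (typeof document === 'undefined') return;
  document.execCommand('bold', false, null);
}

// Hypothetical cleanup helper: normalise WebKit's empty-paragraph markup
// (<div><br></div>) to the <p><br></p> form other engines produce, so the
// HTML you store is consistent across browsers.
function normalizeEmptyParagraphs(html) {
  return html.replace(/<div><br><\/div>/g, '<p><br></p>');
}
```

The selection API route would instead grab `window.getSelection().getRangeAt(0)` and use Range methods like `surroundContents` to wrap the selected nodes, which keeps you in DOM terms throughout.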

I’ve been keenly monitoring the ADC for news of when contentEditable will be supported on the iPad in Mobile Safari, but it doesn’t seem to be a near-term priority.  It’s still not supported in the latest 4.3 iOS release.   So contentEditable is ruled out if you’re targeting the iPad; other tablets I’m not so sure of.  To some extent this is not surprising, as getting the experience right for tablet devices is going to take some thinking, given contentEditable certainly wasn’t envisaged with tablets in mind.

Bind to an element, monitor keystrokes, insert into the DOM

The you-bought-it-you-own-it solution.  The advantage over contentEditable is that you can make it work on the iPad and other devices that don’t support contentEditable.  I believe this is the solution Google now uses in its Docs experience.  If text editing is a core competency you need to own, and you’re developing a custom solution, then this is a feasible option.  It’s a lot of work, but owning everything gives you great power, and it uses standard DOM operations so is well supported by the browsers you’ll care about.  If you’ve got a product where you’re using OT or causal trees to synchronise changes in a collaborative environment, this works well, as you likely already have that information to send to the server to synchronise user edits anyway.
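The core of the approach can be sketched as a tiny model: the document is a plain string plus a caret position, key handling mutates the model, and a browser-only render step repaints from it.  This is only an illustrative skeleton (modern `e.key` handling, naive full repaint), not how Google Docs actually does it:

```javascript
// The editing model: text buffer plus caret position.
function Editor(text) {
  this.text = text || '';
  this.caret = this.text.length;
}
Editor.prototype.insert = function (ch) {
  this.text = this.text.slice(0, this.caret) + ch + this.text.slice(this.caret);
  this.caret += ch.length;
};
Editor.prototype.backspace = function () {
  if (this.caret === 0) return;
  this.text = this.text.slice(0, this.caret - 1) + this.text.slice(this.caret);
  this.caret -= 1;
};

// Browser-only wiring: capture keystrokes and repaint from the model.
function attach(editor, el) {
  if (typeof document === 'undefined') return;
  document.addEventListener('keydown', function (e) {
    if (e.key === 'Backspace') editor.backspace();
    else if (e.key.length === 1) editor.insert(e.key);
    el.textContent = editor.text; // naive render; real editors diff the DOM
  });
}
```

The nice property for collaborative editing is that `insert`/`backspace` are already position-addressed operations, i.e. exactly the shape you’d ship to the server for OT.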

Canvas

Canvas is the newest technology you can implement text editing with.  This is another solution where you need to own the whole stack, monitor keystrokes and insert glyphs.  Canvas is fast; very fast, which makes things like displaying graphics a very fluid experience in modern browsers.  It has a pixel coordinate system which gives you fine-grained control over everything, even more so than any HTML-generating approach.  My early prototypes did raise a blocker that ruled it out for me, though.  The canvas API uses methods like fillText to write text and measureText to determine the space it’s going to take.  One of the core features of a text editor is that it requires overlaying a cursor to indicate the position of active editing.  The problem is that measureText only works reliably with fixed-width (monospace) fonts.  This is why it works in programming environments like Bespin/SkyWriter, which use code-oriented monospaced fonts.  measureText gives you the width in pixels.  When using a proportional font this width will not be consistent, due to anti-aliasing and the proportional algorithms that make text look pretty on your screen.  Take the word ‘cat’.  Measuring ‘cat’ gives you the width of the whole word.  If you want to shift the cursor to between the ‘a’ and the ‘t’, you need to know how much space ‘ca’ takes within the whole word.  Due to the calculation (particularly if you start worrying about bold and italics), the measureText of ‘ca’ will include a few extra pixels to account for the fact that ‘a’ is now the end letter of a word.  So measureText returns the total space to print ‘ca’ as a word, including all styles applied to the font and padding at the end letter.  If you wanted to overlay a cursor next to the ‘a’ in ‘cat’ using measureText to calculate where the ‘a’ ended, by default you’d end up with the cursor sitting somewhere in the ‘t’.  Obviously being off by a few pixels matters in the UI.
As the calculation of proportional fonts is quite complex and goes into low-level font technology, determining a feasible cursor position needs more information than is currently available.  In proportional fonts, particularly when dealing with italics, letters technically overlap; e.g. in ‘la’ the l actually pushes into the top space over the a, depending on the font.  So where should the cursor go?  At the end of the l, or at the beginning of the a (the beginning of the a, on top of some of the l)?  The obvious solution would be to add this information to the API so that it can report where letters start and end and their general dimensions.   That said, given the non-accessibility of canvas and the fact it’s not meant to be a text-editing environment, there are good reasons why the API designers probably don’t want to facilitate this madness.   There are hacks, of course, to figure it out.  I played with writing the text onto a white background, reading the rendered text back as an image and then using pixel sampling to determine where the letter really started; yuck!  It’s a lot of work, and when you care more about the input than absolute control over display, contentEditable or rolling your own direct DOM manipulation solution is the quickest and easiest path.
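The caret problem can be made concrete with a toy sketch.  The measure function is injected so this runs anywhere; in the browser it would be `function (s) { return ctx.measureText(s).width; }`.  The two metrics below are made up for illustration — the "proportional" one just pads the end of a run, standing in for kerning and anti-aliasing effects:

```javascript
// The naive caret calculation: the x offset of the caret is the measured
// width of the text before it.
function caretX(text, caretIndex, measure) {
  return measure(text.slice(0, caretIndex));
}

// Monospace metric (every glyph 8px): the prefix width is exact, which is
// why this approach works in code editors like Bespin/SkyWriter.
function monospace(s) { return s.length * 8; }

// Toy "proportional" metric: measuring a prefix as a standalone word adds
// end-of-word padding, so the width of 'ca' alone is NOT the distance to
// the a/t boundary inside 'cat' - the caret lands inside the 't'.
function proportional(s) { return s.length * 8 + (s.length ? 3 : 0); }
```

With `monospace`, `caretX('cat', 2, monospace)` is exactly twice the glyph width; with `proportional` it overshoots by the padding, which is the few-pixels-off cursor described above.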

 

Playing with Cassandra Again September 30, 2010

Filed under: cassandra,development,internet,Uncategorized — danielharrison @ 12:59 am

I’ve recently been playing with the latest version of Cassandra again.   One new thing going in a direction I like is that it seems to be growing into a more general enterprise key-value store, rather than something solving only a specific high-volume website’s requirements.  To me it felt like there’d been a lot of work beefing up the management side and making it solve a more generic problem.  The programmatic ad hoc schema editing was a good improvement and, based on this direction, 1.0 is shaping up to be really good.

My previous access code used the Thrift API directly.  For this prototype I tried out a few libraries: Pelops and Hector.  Both seemed to still be Thrift-focused, and I’m not sure how this works with the change to Avro.  Thrift always felt clumsy to me.  Technologies like Thrift and Avro, where you’re expressing a language-independent communication protocol that various languages need to talk, in my view can’t help bleeding those idioms and that generality up to the client.  It means client access code often feels, well, slightly awkward; a bit like the good old days of IIOP/CORBA and EJB communication.  My personal preference is targeted hand-coded adapters that feel like a good fit for the language, but the downside, of course, is that clients can lag and may not always be available for your language of choice.  So it’s a tradeoff, as always.  Hector seems to be actively trying to avoid this but still has wrappers where it feels a bit Thrifty, e.g. HKSDef instead of KSDef used to create a keyspace.  If you are trying out and evaluating these libraries, I would highly recommend you bite the bullet and build the Cassandra source your chosen library targets yourself.  Due to the fast-moving nature of the project, the current releases look out of date, and to get things working you really need the latest trunk versions of everything.  For example, I don’t think beta2 of Cassandra 0.7 is available as a package, but it seems to be required by the current versions of Pelops and Hector, and Pelops is source-only on GitHub, so you’ll likely be building things yourself anyway.  I was impressed by both; there’s a lot of room for future improvement, and both seem to be shaping up as strong client access libraries.

Another good thing is that some valuable resources are coming through.  At the moment it’s a lot of Google and reading the forums to nut out problems.  I bought the ‘Cassandra: The Definitive Guide’ Rough Cuts book from O’Reilly, and it seems to have taken a lot of that information, focused it and made it a good source of explanation for idioms and general wisdom.  So my recommendation would be to buy it; it looks like it’s going to be an invaluable reference.

My biggest problem with using Cassandra at the moment is support for multi-tenancy.  The problem I have in mind requires text indexing and content that is private per account.  With a model like Cassandra you need to know what you will be searching for first, and you basically build column families representing those indexes.  In my case I have users, accounts (many users), objects (storing text) and various indices over that text that drive my application.  Think a little bit like an RDF store with accounts and users.  In a traditional database model I would probably store this as a separate database for each account, which may mean each running datastore instance has tens to thousands of databases.  With Cassandra and the way this is structured, that would not be advisable: each keyspace maintains memory etc., and to take advantage of Cassandra’s replication model it’s more advisable to have fewer keyspaces.  One of the easy wins of the separate-database-per-account model is that you’re guaranteed not to see other accounts’ data; you’re connecting to the datastore for that client, which makes security very easy to guarantee and maintain.  Under Cassandra this becomes an application concern at the moment.  For my prototype I wasn’t happy with the extent to which this was invading my code, and it required extra indices to make it all work, all of which increased the cognitive load of developing the application.  There’s work afoot around multi-tenancy requirements, but until that’s addressed, for me at least, it rules Cassandra out.  The Cassandra team are working on it and there are some interesting proposals (the namespace one seems interesting), and I’m sure once it’s complete it will really make Cassandra the first choice for an enterprise key-value store.
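To illustrate what "an application concern" means in practice, here is a sketch of the kind of code that ends up invading the application: every row key carries the account id, and every query path has to re-check ownership.  All the names here (account ids, the ':' separator) are made up for illustration:

```javascript
// Tenancy pushed into application code: prefix every row key with the
// account id so one shared keyspace serves all accounts.
function rowKey(accountId, objectId) {
  if (accountId.indexOf(':') !== -1) {
    throw new Error('account id must not contain ":"');
  }
  return accountId + ':' + objectId;
}

// The isolation you get for free with a database-per-account model
// becomes a manual, easy-to-miss guard on every read path.
function assertOwnedBy(accountId, key) {
  if (key.indexOf(accountId + ':') !== 0) {
    throw new Error('cross-account access denied');
  }
  return key;
}
```

Every index (including the text-search column families) needs the same prefixing and the same guard, which is exactly the cognitive load described above.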

 

Wave Good Bye August 5, 2010

Filed under: collaboration,development,internet,Uncategorized — danielharrison @ 3:44 am

It looks like Google Wave has been sent to the knackers.  It was an ambitious product trying to change the technology we use to collaborate.  I’m sure we’ll see it come back in various products, but as a standalone product it looks like it won’t be around any more.  I remember when it first came out; the general consensus, at least in the office I was working in, was: neat technology, but what problems can it help me solve?  Is this really that much better than email?  There have been a lot of casualties in the groupware space, and I guess Google Wave is another victim in the war on collaboration.  The current email communication hegemony seems ripe for disruption: a technology stack from another era, and massive implications and cost savings if you can make people more productive.

My startup knowtu operates in the enterprise collaboration and communication market, which Wave kind of did too, and the lessons I think I see are:

  • Email still rules and will for the foreseeable future.
  • Technology is important but by itself doesn’t solve problems.
  • Good enough wins.

While I think Wave had its issues, it’s disappointing to see it end, particularly as it had a local Australian connection with a large contingent in Google’s Sydney office.   I always felt that, over time, the tech platform meant neat stuff would be built on top and it would slowly succeed.  A lot of the tech is open source, so maybe it will come back at some point in the future; I guess we’ll just have to wait and see.

 

WordPress Theming October 16, 2007

Filed under: development,internet,me — danielharrison @ 12:59 am

Small Potato (should I not be using capitalisation?) created a very useful resource on how to create a WordPress theme.

I found it most useful in understanding how WordPress is hooked together. Having recently joined the blogging fraternity I needed to create a theme, but went about it in a slightly different way. If you’re familiar with CSS, HTML, PHP and the like but, like me, are just starting to write and itching to be up and running, then this is the alternative technique that I used. I tend to use this approach more generally for any site that’s already got well-formed and structured content, and treat it as a website refactoring.

First things first: make sure you’ve got Firefox, the Web Developer extension and a trusted editor (I use EditPlus).

Choose your theme. I just switched to Classic, which seemed pretty simple and wasn’t going to be too hard to dig through and edit. Add a comment and some sample text that’s going to give you a representative sample; throw in a few paragraphs, lists, bold and italic text so that you’ve got a good sample of what’s going to go up on your blog.

Next, download the theme, or just copy the existing one and rename it for your site so you can make changes locally and offline, eg Classic -> knowtu in my case.

I didn’t particularly care that I would have an incomplete site available for a few days, so I could do this live and didn’t really need a local WordPress install or config. If you do, Small Potato’s guide links through on how to do this. From memory, in SUSE for example it’s available as an app to install via YaST, and being a PHP app it’s pretty straightforward anyway.

First step: save the page using Firefox’s Save Page As, choosing ‘Web Page, Complete’, to a suitable location.

Save Page As in Firefox

Secondly, open the content in Firefox and your editor and include a reset.css. Why? Well, once everything’s on a level playing field it’s much easier to go forward, as well as being closer to deterministic; I first came across the reasons for a reset CSS via an MSDN blog. Then change the existing style.css reference to a local one. Finally, remove the content from style.css. Viewing the page in Firefox should show a pretty blank page with all fonts the same (including headings), which is exactly what you want, as the next step is to start moving and sizing things and you want a blank slate to go nuts on.

Change styles to local and include a reset.css

Start looking at the content and figuring out how it all hangs together. I also ran HTML Tidy over it; if you want to grok the content, well-formed and laid-out is usually easier.

Where the Web Developer extension comes in is that it gives you a way to edit the site’s CSS and see the changes instantly, so you can play around and see how you want things to sit. You can enable it by clicking on the CSS dropdown and selecting Edit CSS, or Ctrl+Shift+E. You should see a display like the one below.
Firefox Edit CSS dialogue.

So, on to the formatting: structure first. Typically you want to play around with layout and get a feel for how it’s going to sit on the page, which I tend to find starts giving you colour and font ideas. When I was doing mine I also changed all fonts to Verdana and got the fonts roughly the right size. When doing font sizes and structure I tend to use the named measures, eg small, xx-small. The advantage of this is that it really shows proportions, and if you’re not sure, a quick trick I use to visualise is Ctrl+mouse scroll. This quickly moves your font measure up and down by one, which is handy for seeing if you want a bigger or smaller font. This is where Edit CSS comes in really handy again. If you turn on View Style Information, you can quickly see the structural markup by moving your mouse over the page. Typing in the CSS updates as you go, so it’s a neat, cheap WYSIWYG editor with minimal overhead. If your CSS is rusty or you need a reference, try ILoveJackDaniels’ handy PDF cheat sheet. One thing though: don’t forget to save. From what I saw, switching tabs would lose your edits.

Finally, start thinking about colour and fonts. If you’ve been playing around with the structure you’ve probably already been doing this, or have a colour scheme in mind, or have even drawn and coloured it out on a napkin. I used Adobe’s Kuler to visualise colour themes.

You may notice during this that the markup isn’t quite what you like. If you edit the page to be what you want in an ideal circumstance, don’t forget to go back to the template and make the matching changes; it’s usually pretty easy to track things down. Refer to the WPDesigner tutorial if you’re struggling. I tend to work better with formatting, so I mainly deleted stuff like the sidebar content and things I planned on growing later. XMLFriendsNetwork? Who needs it, I’ve got plenty of friends already. I’ve done quite a bit of XML stuff, and if they’re still friends after all this time, good for them. I also did some minor formatting as I went: spacing, converting a few things to divs, and adding some extra classes so I could identify them better.

Finally, copy any CSS images back into the theme folder, create a screenshot called (not surprisingly) screenshot.png, and upload the theme to the themes folder in WordPress. Select it in WordPress and it should now be running. You’ll probably want to run it back through the W3C validator to make sure it’s all kosher, but if you’ve done it right there should be no changes and it will all work out happily.

I tend to use this approach for sites that have well-formed content and structure, and it’s pretty quick to get a site with a new design. The power of this approach rests on the strength of CSS: the more semantic the markup, the easier it is to radically redesign a site in a matter of hours (granted, WP Classic is pretty simple).

I should point out I also only really cared about IE 7, Firefox and browsers that can handle W3C CSS standards; for the people likely to be interested in this blog, along with two-thirds of the web, I figure the majority are going to be just fine with this. Having actually done sites with graceful degradation all the way back to the 4.x series, for something that I want to be fun I just couldn’t stand the pain.

So in wrap-up, this got my site up, running and uniquely mine pretty quickly. It’s not complete and hopefully like the content, will grow over time.

 

LOLCat Internet October 13, 2007

Filed under: development,internet,me — danielharrison @ 3:14 pm

TCP/IP version 6 is designed to allow squidillions (that’s the official measure, mind you) of Internet addresses. Designed for scalability, one of its features is to allow every conceivable device that wants to be connected to be, in fact, connected. Numbers scale perfectly: need more room? Sure, no worries, we’re now 2 to the 128 (squidillions) instead of 2 to the 32, and now everyone and everything can have its own static IP address and NAT falls by the wayside.

What’s obvious here is that human memory isn’t infinitely scalable, so we don’t remember these things and prefer not to be slaves to the machines. This may of course change in the next 100 years, and I go on the record now to endorse and welcome our new machine overlords. In the meantime, if we’re still losing things like our keys, wedding rings and, yes, physical money in the 21st century, then humans aren’t the ones going to do the changing in the short term. So what’s the answer? Words: a 5000-year-old tradition.

Have you ever played 20 questions? In this simple game, with a complete stranger, you can identify a common element in 20 questions, and in most cases fewer, via simple yes and no answers. In fact it’s such an easy thing that computers are trying to get in on the action. Mapping words, English predominantly, to numbers is obviously unscalable. There are inherent limitations when you remove context, and on top of this we have a rigid hierarchy to which you have to adhere. Ultimately, governmental regulation means that humans, with their ideals and machinations, define what constitutes valid and invalid names. There have been attempts at alternative, more liberal registries, but support from vendors is somewhat inevitably lacking.

When I was researching my domain, these regulations, and then sub-regulations in individual countries, meant that on top of an inherently unscalable concept, restrictions like trademark law and local government rules have shrunk the limited pool further, giving rise to what I’m terming the LOLCat Internet phenomenon.

If you’re interested, the rules are here: http://icanhascheezburger.com/how-to-makes-lol-pix/

I ended up with the .com and may buy the .net to round things out. I’m in Australia, so the .com.au would have been more appropriate, but the rules conspire against it. It’s also much more expensive than .net and .com. The rules basically mean that if I wanted a .com.au I would have needed to satisfy some quite restrictive requirements, eg an ABN [Australian Business Number], a registered business name that maps to the domain… For more info see: http://www.domainnameregistration.com.au/rules.htm

I may start a business on this domain for some software ideas I have, so a .com.au would have been appropriate. Under the Australian rules I need all of the above just to consider registering a name. An ABN is relatively easy to get (I do in fact already have one, and yes, I’m late with my BAS) and can be had online without having to talk to anyone. The remaining requirements start to involve certain structures and other people: cosigners, company secretaries, trademarks, more registration; basically the establishment of a business. In this way the Australian system, in my opinion, acts to squash innovation and makes the .com.au artificially scarce, hence more expensive. This tends to mean the .com is the most appropriate and pretty much a catch-all. I’d be interested to find out whether this has acted to quash startup establishment in Oz compared to somewhere more lenient like the US.

So what do we do while we wait for the singularity, when numbers gain the supremacy they deserve? Well, not much. As long as these things remain in bureaucracy evolved from the last century, .com and .net are about it really, and we celebrate the LOLCat Internet.

The reason I write this, of course, is that I’ve decided to add my voice to the maelstrom, and picking a domain that was meaningful to me, and hopefully to readers, was a tad trickier than expected. I ended up using http://instantdomainsearch.com/ which was invaluable.