Tuesday, February 25, 2014

Why you shouldn't choose Spine.JS

This is really for my own benefit... something to remind me of the reasons why I shouldn't ever choose Spine.JS over other client side MVC frameworks. Shoot me a comment if you'd like to add to this list.... In order of importance:

  1. It's Backbone's red-headed stepchild. Just choose Backbone.
  2. Their AJAX mixin for models sucks. It relies on events fired from globally accessible objects, rather than callbacks and/or promises, to complete async actions. This introduces a layer of indirection and potentially messy code. In small apps this may be acceptable, but not in larger apps.
  3. Configuring the URLs used to retrieve model information is strongly opinionated and difficult to override. REST principles are enforced, and any deviation from them is problematic. Should a model be loadable from multiple URLs (e.g. a Person model list loaded from both http://my-app/friends and http://my-app/friends/1/friends), there is no easy way to implement this contextual switching without implementing your own ajax module.
  4. Its collection helpers (e.g. find, all, fetch, select etc.) clone models before returning them, which can be very cumbersome as the model you retrieve is not the same object as the one you set. Adding models to the cache is also annoying, as a call to save() is required. In calling this, a bunch of logic you might not want gets executed, such as assigning an id and firing 'create' and 'save' events. Quite often you just want your models stored in an accessible collection in the state they are in. It's probably better to do this in your own managed collection rather than via Spine's cache - so really, what's the point of them? (A short sketch of this follows below the list.)
  5. Due to this cloning, the models themselves are cumbersome: every property added to a model must be declared as an attribute or otherwise configured.
  6. It uses global collection helpers and caches (i.e. MyModel.all). This sounds wonderful, and in small apps it's handy, but in a sizeable app it can get nasty: if the developer is not careful, it implicitly introduces an unwanted level of global state into the application.
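
A rough sketch of the behaviour described in points 4-6, using Spine's plain JS model API (the model name and attributes are made up for illustration):

var Task = Spine.Model.sub();
Task.configure("Task", "name", "done"); // every property must be declared (point 5)

var task = new Task({ name: "write post" });

// the instance isn't in the global cache until save() is called, and calling
// save() also assigns an id and fires 'create' / 'save' events (point 4)
task.save();

// helpers like find() / all() return clones, not the instance you saved
var fetched = Task.find(task.id);
fetched === task; // false - a different (cloned) object

// Task.all() is a globally accessible cache - effectively global state (point 6)
var everything = Task.all();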


Saturday, May 25, 2013

Git Bisect: Find the commit that broke my tests


So master is broken and you don't know which commit contains the offending code....

Git bisect is an amazing tool. It lets you flag the last known good commit and the first known bad one, then binary-searches the commits in between, running a script (e.g. your tests) at each step until it isolates the commit that introduced the breakage....

i.e.

git bisect start [some-breaking-commit-sha] [last-good-commit-sha]
git bisect run [test-command]

a working example using rspec:

git bisect start 120cba5 c2d0a5e
git bisect run rspec spec/mytest_spec.rb:123
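
Once the offending commit has been reported, return your working copy to where it was before the bisect:

git bisect reset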

Sunday, April 21, 2013

Git: patching diffs between branches


Recently I accidentally committed changes to master that I wanted to commit in a feature branch... I made the mistake of

  1. branching master (to a feature branch) from my erroneous commit - then
  2. resetting (HARD) master to the previous commit

This axed all the history in my feature branch, as the commit I had branched from on master no longer existed...

To fix it, I created a new feature branch from master, then generated a patch of the differences between master and my original (busted) feature branch:

git checkout master
git checkout -b [new_feature_branch]
git diff --no-prefix master origin/[busted_feature_branch] > my.patch

then apply the patch:

patch -p0 < my.patch

Saturday, April 13, 2013

Javascript: Safely reading a nested property


In some templating frameworks it can be really annoying to read a nested property of a JS object, as it can mean chaining a heap of null checks together....

For instance, if I need to safely access the nested property 'to' in 'email.addresses.to', it means having to do something like:

var addresses;
if (email && (addresses = email.addresses)) {
  // print addresses.to
}

This is verbose and annoying. I needed a function that would return the nested value, or simply return an empty string if any property in the chain was null or undefined.

i.e.

safeRead(email, 'addresses', 'to');

I also wanted property chains to be as long or short as I'd like, e.g.:

safeRead(my, 'very', 'deeply', 'nested', 'property');

The finished product:
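
A minimal sketch of the function, matching the usage above (an illustrative implementation rather than the exact original):

// Returns the value at the end of the property chain,
// or '' if any link in the chain is null or undefined.
// e.g. safeRead(email, 'addresses', 'to');
function safeRead(obj) {
  var props = Array.prototype.slice.call(arguments, 1);
  var current = obj;

  for (var i = 0; i < props.length; i++) {
    if (current === null || current === undefined) return '';
    current = current[props[i]];
  }

  return (current === null || current === undefined) ? '' : current;
}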


Thursday, April 11, 2013

IE Ajax requests returning 401 Unauthorized in Rails / Sinatra

Here's a quick little nugget of info for any devs experiencing ajax issues in IE....

Firstly, earlier versions of IE (<= IE 8) aggressively cache ajax responses, and it can be a pain to resolve without compromising (breaking through) server side caching... I wrote an article here about that.

To add another drop to the ocean of pain that is IE, I found that on Windows 7 (and Windows 7 only), in IE7, IE8 and IE9, all AJAX requests were consistently returning 401 Unauthorized statuses. After much mining through code and system settings, a workmate and I discovered that on Windows 7, all ajax requests send an uppercase ACCEPT_LANGUAGE header, whereas regular synchronous requests send a lowercase one...

This may seem inconsequential, but for those developing a rack-based app using rack-protection, it is enough to trip the session hijacking check, which compares this header with the one sent on previous requests (https://github.com/rkh/rack-protection/blob/master/lib/rack/protection/session_hijacking.rb#L23).

As the case is different the equality check fails, resulting in rack-protection blocking the call and returning 401 Unauthorized.

Not a fun bug.

The solution is to either downcase the header client side for all ajax requests (e.g. via $.ajaxSetup), or introduce some custom middleware before rack-protection that downcases the offending header before rack-protection checks it.
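
A minimal sketch of the middleware option (the class name is illustrative) - wire it into the stack before Rack::Protection so the header is normalised before the session hijacking check runs:

class DowncaseAcceptLanguage
  def initialize(app)
    @app = app
  end

  def call(env)
    # normalise the header value IE sends with inconsistent casing on Windows 7
    lang = env['HTTP_ACCEPT_LANGUAGE']
    env['HTTP_ACCEPT_LANGUAGE'] = lang.downcase if lang
    @app.call(env)
  end
end

# e.g. in config.ru
# use DowncaseAcceptLanguage
# use Rack::Protection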

Tuesday, March 26, 2013

Backbone.JS and SEO: Google Ajax Crawling Scheme


Most search engines hate client side MVC, but luckily there's a few tools around to get your client side routes indexed.

As most web bots (e.g. Google's and others') don't interpret javascript on the fly, they fail to index javascript-rendered content. To overcome this, Google (and now Bing) support the 'Google Ajax Crawling Scheme' (https://developers.google.com/webmasters/ajax-crawling/docs/getting-started), which basically states that IF you want js-rendered DOM content to be indexed (e.g. the results of ajax calls rendered into the page), you must be able to:
  1. Trigger a page state (javascript rendering) via the url using hashbangs #! (e.g. http://www.mysite.com/#!my-state), and
  2. Serve a rendered DOM snapshot of your site (i.e. AFTER javascript modification) on request.
Bots adhering to the scheme request each such state with the hashbang converted to an _escaped_fragment_ query parameter (e.g. http://www.mysite.com/?_escaped_fragment_=my-state). So if you're using a client side MVC framework like Backbone.js, or simply have a javascript-heavy page whose various states you want indexed, you will need to provide this DOM snapshotting service server side. Typically this is done using a headless browser (e.g. QT, PhantomJS, Zombie.JS, HtmlUnit).

For those using ruby server side, there's a gem which already handles this, google_ajax_crawler, available on RubyGems.

gem install google_ajax_crawler


It's used as rack middleware: it intercepts requests made by web bots adhering to the scheme, renders your site server side, then delivers the rendered DOM back to the requesting bot as a snapshot.

A simple rack app example demonstrating how to configure and use the gem:
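
A rough sketch of how the middleware might be wired into a rack app - the class name and any configuration shown are assumptions rather than the gem's confirmed API, so check its README for the exact usage:

# config.ru - names below are assumptions
require 'sinatra'
require 'google_ajax_crawler'

# assumed middleware class - it serves rendered DOM snapshots to crawlers
# requesting pages via the _escaped_fragment_ parameter
use GoogleAjaxCrawler::Crawler

get '/' do
  erb :index
end

run Sinatra::Application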

Wednesday, December 12, 2012

Sinatra Asset Snack: Coffeescript and SASS compilation for Sinatra

Up until recently most of my RIAs have been built using Backbone.JS and Sinatra, with the Sinatra Assetpack gem handling asset compilation and pipelining. Unfortunately, I recently found some performance issues with Sinatra Assetpack.

Generally speaking it's great at managing coffeescript and SASS compilation and minification on the fly. However, I was finding that as my codebase grew, each time I fired up a server in development it was taking way too long to clear its cache, recompile and load a page. This was a real drag when working on UX, as the recompilation time made development slow and clunky. I was also finding that, even after warming its asset cache, serving assets via assetpack in development and test environments was really preventing quick page loads and was starting to become annoying.

In response, I wrote a simple gem to slim down the asset serving codebase and handle runtime compilation of coffeescript and SASS in a faster fashion. It's released on RubyGems:

gem install sinatra-asset-snack

At the moment it handles only SASS and Coffeescript compilation, and allows you to designate script bundling into common files (e.g. application.js). For example:
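
A rough sketch of the kind of configuration involved - the helper names here are assumptions rather than the gem's confirmed API, so check the repo's README for the exact usage:

# app.rb - method names are assumptions
require 'sinatra/base'
require 'sinatra/asset_snack'

class MyApp < Sinatra::Base
  register Sinatra::AssetSnack

  # bundle all coffeescript under app/coffee into one compiled file
  asset_map '/javascripts/application.js', ['app/coffee/**/*.coffee']

  # compile SASS stylesheets
  asset_map '/stylesheets/application.css', ['app/styles/application.scss']
end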

Minification isn't handled yet, mainly because most sites (should) use gzip compression anyway, which makes minification a largely secondary / unnecessary concern.

Should anyone want to write any additional compilers for other syntaxes feel free! The code can be found at https://github.com/benkitzelman/sinatra-asset-snack