Saturday, May 25, 2013

Git Bisect: Find the commit that broke my tests


So master is broken and you don't know which commit contains the offending code....

Git bisect is an amazing tool. It allows you to flag the last known green commit and the current broken one, then binary-search the commits in between, running a script (i.e. your tests) at each step until it finds the breaking commit.

i.e.

git bisect start [some-breaking-commit-sha] [last-good-commit-sha]
git bisect run [test-command]

A working example using RSpec:

git bisect start 120cba5 c2d0a5e
git bisect run rspec spec/mytest_spec.rb:123

Sunday, April 21, 2013

Git: patching diffs between branches


Recently I accidentally committed changes to master that I wanted to commit in a feature branch. I made the mistake of:

  1. branching master (to a feature branch) from my erroneous commit, then
  2. hard resetting master to the previous commit

This axed all the history in my feature branch, as the commit I had branched from in master no longer existed.

To fix it, I had to create a new feature branch from master, then create a patch of the differences between my original feature branch and master, as follows:

git checkout master
git checkout -b [new_feature_branch]
git diff --no-prefix origin/[busted_feature_branch] > my.patch

then apply the patch:

patch -p0 < my.patch

Saturday, April 13, 2013

Javascript: Safely reading a nested property


In some templating frameworks it can be really annoying to read a nested property of a JS object, as it can mean chaining a heap of null checks together.

For instance, if I need to safely access the nested property 'to' in 'email.addresses.to', it means having to do something like:

var addresses;
if (email && (addresses = email.addresses)) {
  // print addresses.to
}

This is verbose and annoying. I needed a function that would return the nested value, or simply return an empty string if any property in the chain was null or undefined.

i.e.

safeRead(email, 'addresses', 'to');

I also wanted property chains to be as long or short as I'd like, i.e.:

safeRead(my, 'very', 'deeply', 'nested', 'property');

The finished product looks something like this:
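
A minimal sketch (it reads the property names from the arguments object, treats null and undefined the same, and falls back to an empty string):

// Walks a chain of property names, returning '' if any link is null or undefined.
function safeRead(obj) {
  var props = Array.prototype.slice.call(arguments, 1);
  var current = obj;
  for (var i = 0; i < props.length; i++) {
    if (current === null || current === undefined) {
      return '';
    }
    current = current[props[i]];
  }
  return (current === null || current === undefined) ? '' : current;
}

safeRead(email, 'addresses', 'to'); // => email.addresses.to, or '' if email or email.addresses is missing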


Thursday, April 11, 2013

IE Ajax requests returning 401 Unauthorized in Rails / Sinatra

Here's a quick little nugget of info for any devs experiencing ajax issues in IE....

Firstly, earlier versions of IE (<= IE8) cache everything ajax, and it can be a pain to resolve without compromising (breaking through) your server side cache. I wrote an article here about that...

To add another drop to the ocean of pain that is IE, I found that on Windows 7 (and Windows 7 only), in IE7, IE8 and IE9, all ajax requests were consistently returning 401 Unauthorized statuses. After much mining through code and system settings, a workmate and I discovered that in Windows 7, ajax requests send the ACCEPT_LANGUAGE header value in uppercase, whereas regular synchronous requests send it in lowercase...

This may seem inconsequential, but for those developing a Rack-based app using rack-protection, it is enough to trip the session hijacking check, which compares this header against the one sent in previous requests (https://github.com/rkh/rack-protection/blob/master/lib/rack/protection/session_hijacking.rb#L23).

As the case is different, the equality check fails, resulting in rack-protection blocking the call and returning 401 Unauthorized.

Not a fun bug.

The solution is to either downcase the header client side for all ajax requests (e.g. via $.ajaxSetup, as sketched below), or introduce some custom middleware ahead of rack-protection that downcases the offending header before the check runs.
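
For the client side route with jQuery, something along these lines is the idea (the fallback chain used to build the language value is illustrative):

// Send a lowercased Accept-Language with every jQuery ajax request so it
// matches the value IE sends with regular (non-ajax) requests.
$.ajaxSetup({
  beforeSend: function (xhr) {
    var lang = (navigator.userLanguage || navigator.language || 'en-us').toLowerCase();
    xhr.setRequestHeader('Accept-Language', lang);
  }
});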

Tuesday, March 26, 2013

Backbone.JS and SEO: Google Ajax Crawling Scheme


Most search engines hate client side MVC, but luckily there are a few tools around to get your client side routes indexed.

As most web bots (i.e. Google and others) don't interpret javascript on the fly, they fail to parse javascript-rendered content for indexing. To overcome this, Google (and now Bing) support the 'Google Ajax Crawling Scheme' (https://developers.google.com/webmasters/ajax-crawling/docs/getting-started), which basically states that IF you want js-rendered DOM content to be indexed (i.e. the results of ajax calls rendered into the page), you must be able to:
  1. Trigger a page state (javascript rendering) via the url using hashbangs #! (i.e. http://www.mysite.com/#!my-state), and
  2. Serve a rendered DOM snapshot of your site AFTER javascript modification, on request.

If you are using a client side MVC framework like Backbone.js, or simply have a javascript heavy page, and wish to get its various states indexed, you will need to provide this DOM snapshotting service server side. Typically this is done using a headless browser (i.e. QT, PhantomJS, Zombie.JS, HtmlUnit).
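
To give a sense of what that snapshotting amounts to, here's a bare-bones PhantomJS sketch (the fixed timeout is a crude stand-in for a proper 'page has finished rendering' check):

// Load a client side route, give the javascript a moment to render,
// then print the post-render DOM, i.e. the snapshot you'd hand back to the bot.
var page = require('webpage').create();
page.open('http://www.mysite.com/#!my-state', function (status) {
  window.setTimeout(function () {
    console.log(page.content); // the DOM after javascript has run
    phantom.exit();
  }, 1000);
});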

For those using Ruby server side, there's a gem that already handles this, google_ajax_crawler, available on RubyGems:

gem install google_ajax_crawler


It's used as Rack middleware: it intercepts requests made by web bots adhering to the scheme (the #! state arrives encoded as an _escaped_fragment_ query parameter), scrapes your site server side, then delivers the rendered DOM back to the requesting bot as a snapshot.

A simple rack app example demonstrating how to configure and use the gem: