
A Tale of Unit Testing Technologies

I've had a big, bad, dirty secret for a while now.  One I've kept hidden from my friends and colleagues for years.  But now it's time to come clean... here goes... hold onto your pants.

<pause for dramatic effect>
I hardly ever write unit tests!
<gasps>

I've come clean and it feels good.  Another reason it feels good is that it's slowly but surely becoming a falsehood.  I've recently had my eyes opened to the art of BDD and discovered that I've simply been doing it wrong - I was so busy writing code first and then tests to fit my (possibly incorrect) assumptions that the tests held no worthwhile value.  Time wasted, in my mind.  When writing those tests I was also too worried about the internal mechanics of the "unit" being tested and not about its intended behaviour - essentially missing the point, and time wasted once more.  Being a man that has no time for, well, time wasting, I feel I've come full circle on why I didn't test.  In the past, at least...

But I digress... Where was I?  Ah yes, I've had my eyes opened - I'm finally doing it right.  Yep, I've gone all TDD/BDD - write a failing test, write just enough code to make it pass, write the next failing test, and so on and so forth.  It's been a good experience.  I finally see the benefit in it.  Yes, it takes slightly longer and it's hard to break old habits, but it has already caught plenty of assumptions and edge cases that would normally only surface during system testing, after a frustrating debugging session with plenty of hair pulling and swearing.
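
Just to make that rhythm concrete, here's a rough sketch of one cycle using NUnit and the same tag-normalisation example I'll come back to later in this post.  The test comes first and fails; the Tag class underneath it is just one guess at enough code to make it pass, not gospel.

using NUnit.Framework;

[TestFixture]
public class TagNormalisationTests
{
    // Red: written before NormalisedName does anything useful, so it fails first.
    [Test]
    public void Setting_the_name_produces_a_normalised_name()
    {
        var tag = new Tag { Name = "New Test Tag" };

        Assert.AreEqual("newtesttag", tag.NormalisedName);
    }
}

// Green: just enough implementation to make the test pass, then on to the next failing test.
public class Tag
{
    public string Name { get; set; }

    public string NormalisedName
    {
        get { return (Name ?? string.Empty).Replace(" ", string.Empty).ToLowerInvariant(); }
    }
}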

Since the whole behaviour-driven enlightenment I've been trying to find a suitable set of technologies that allow me to perfect my new-found approach and turn me into a fast and effective code-cutting machine.  NUnit is fine - it does the job - but it's a task in itself to express what you want out of a test, which slows things down.  No, if you're going to do it right you want the best tools.  So I have decided to jump in head first and discover which testing tools are the most effective.  I've been through a few already, including:

  • SpecFlow (C#)
  • MSpec (Machine.Specifications) (C#) 
  • JSpec  (JavaScript)
  • Should/Should.Fluent (C#)

Here are a few of my thoughts so far. 

SpecFlow (http://www.specflow.org/)

SpecFlow is essentially the .NET equivalent of Cucumber (from the Ruby world), offering a natural language syntax for defining the scenarios that make up the behaviour of a feature.  It's better shown with an example:

Feature: Tags

Scenario: Normalise Tag Name
	Given I have created a new tag
	When I set its name to "New Test Tag"
	Then it should have a normalised name of "newtesttag"

So you can see the test here is written in pretty much plain English.  It's easy to understand, and that means even domain experts (aka "the business folk") can help write them.  So how does this end up being turned into executable tests?  Well, initially you need to do a bit of wiring up using Step Definitions.

using TechTalk.SpecFlow;

[Binding]
public class TagSteps
{
    Model.Tag _tag;

    [Given(@"I have created a new tag")]
    public void GivenIHaveCreatedANewTag()
    {
        _tag = new Model.Tag();
    }

    [When(@"I set its name to ""(.*)""")]
    public void WhenISetItsNameToNewTestTag(string name)
    {
        _tag.Name = name;
    }

    [Then(@"it should have a normalised name of ""(.*)""")]
    public void ThenItShouldHaveANormalisedNameOfNewtesttag(string normalisedname)
    {
        _tag.NormalisedName.Should().Equal(normalisedname);
    }
}

As you can see, we can match up each line of the feature file with an appropriate executable action.  We even have the power of regular expressions at our disposal, allowing us to reuse steps and build up a decent library that travels with us across projects.  Some very, very powerful stuff here - we can get everyone involved in writing these things:

  • Developers and business users during design
  • Developers during development
  • Testers during system test - in fact, why not write a feature as the "Steps to reproduce"?  Win, win!

The thing is... well... technically speaking, SpecFlow isn't really aimed at the fine-grained, unit-test level of testing.  It's more about higher-level integration testing.  I mean, it's usable, but it does become a bit awkward to express things in a reusable manner.  The problem with that is things start feeling quite heavyweight and you need to do a lot of extra work to get them to fit.  No, I think that while SpecFlow is great for many things (automated testing, integration testing, system testing etc.) it's not the best fit for what I'm looking for in this article - unit testing tools.

MSpec (https://github.com/machine/machine.specifications)

Machine.Specifications (MSpec for short) is a Context/Specification framework geared towards removing language noise and simplifying tests. 

That's the official intention, and I must admit I was initially taken by it.  Rather than having a single huge class filled with methods representing tests, MSpec takes the approach that a single class represents a single scenario and uses lambda expressions to offer the BDD-style syntax (Because/It/Subject etc.).  So let's take the Tag scenario described above and convert it to MSpec format:

using Machine.Specifications;

[Subject("Normalise Tag Name")]
public class when_a_tag_is_created
{
    static Tag _tag;

    Establish context = () =>
        _tag = new Tag();

    Because of = () =>
        _tag.Name = "New Test Tag";

    It should_have_a_normalised_name_of_newtesttag = () =>
        _tag.NormalisedName.ShouldEqual("newtesttag");
}

It's quite simple and it really forces you to keep your tests as simple as possible.  To be honest, having used it on a project, I'm not 100% sold.  I think it's down to my personal coding style.  While it reduces the line count, I honestly don't think it necessarily reduces language noise.  I also found it quite awkward to write - the style is different to what I'm used to, and I guess that would change with practice.  As I say, this is a personal opinion and I'm happy to be shown a better approach.

JSpec (http://visionmedia.github.com/jspec/)

JSpec is a clever little JavaScript testing framework.  It's no longer supported by its creators (booooo!) but honestly it's stable enough to use anyway.  JSpec takes the RSpec DSL (a decent subset of it, at least) and is capable of converting it to JavaScript for execution in the browser.  It's got a heap of stuff in it:

  • Mock Ajax
  • Stubbing
  • Fake timers

Let's take a look at a simple example:

describe "Utils.Arrays.compare method"
    it "should compare and sort 2 numeric arrays successfully"
        arr1 = [1,2,3]
        arr2 = [3,1,2]

        Utils.Arrays.compare(arr1, arr2, true).should.be true
    end

    it "should compare 2 numeric arrays unsuccesfully when not sorted"
        arr1 = [1,2,3]
        arr2 = [3,1,2]

        Utils.Arrays.compare(arr1, arr2).should.be false
        Utils.Arrays.compare(arr1, arr2, false).should.be false
    end
end

People who have used RSpec before will feel right at home.  People who haven't should still be able to understand exactly what is going on.  Pretty, right?  I thought so.  For the people who think "ugh, we don't need another dialect/language" - wise up!  Out of all the testing techs I've used recently this one has been the most successful, and the output it generates is nice and clean.

Should/Should.Fluent (http://should.codeplex.com/)

This little gem isn't a framework in itself and can be used with whatever framework you care to use.  Should provides a more expressive way of stating assertions in your code, bringing the code closer to natural language using extension methods and nicer method names.  Example, I hear you say?  Why certainly, sirs and madams:

public void Should_fluent_assertions()
{
    object obj = null;
    obj.Should().Be.Null();

    obj = new object();
    obj.Should().Be.OfType(typeof(object));
    obj.Should().Equal(obj);
    obj.Should().Not.Be.Null();
    obj.Should().Not.Be.SameAs(new object());
    obj.Should().Not.Be.OfType<string>();
    obj.Should().Not.Equal("foo");

    obj = "x";
    obj.Should().Not.Be.InRange("y", "z");
    obj.Should().Be.InRange("a", "z");
    obj.Should().Be.SameAs("x");

    "This String".Should().Contain("This");
    "This String".Should().Not.Be.Empty();
    "This String".Should().Not.Contain("foobar");

    false.Should().Be.False();
    true.Should().Be.True();

    var list = new List<object>();
    list.Should().Count.Zero();
    list.Should().Not.Contain.Item(new object());

    var item = new object();
    list.Add(item);
    list.Should().Not.Be.Empty();
    list.Should().Contain.Item(item);
}

I stole this one from the Should CodePlex site (linked above); it makes use of the fluent syntax, which is optional.  OK, technically it doesn't do much, but it really helps when you're trying to express assertions in your tests.  I highly recommend this one.
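
To see why that matters, compare the bare NUnit way of asserting the tag behaviour from earlier with the Should fluent version.  Just a rough sketch (the using for the fluent syntax is my assumption - check the Should docs for the exact namespace), but it shows why I prefer assertions that read like sentences:

using NUnit.Framework;
using Should.Fluent; // assumption: the namespace that provides the fluent Should() syntax

[TestFixture]
public class TagAssertionStyles
{
    [Test]
    public void Normalised_name_two_ways()
    {
        var tag = new Model.Tag { Name = "New Test Tag" };

        // Classic NUnit: expected-then-actual ordering, reads back to front
        Assert.AreEqual("newtesttag", tag.NormalisedName);

        // Should's fluent syntax: the same assertion, reading left to right like a sentence
        tag.NormalisedName.Should().Equal("newtesttag");
    }
}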

Conclusion

That's the first lot of technologies covered, but there are still plenty out there.  I think my next port of call is to actually spin up IronRuby and get RSpec involved.  After all, it's the marker by which I'm comparing these things, so why didn't I jump on it first of all?  Perhaps I like to build up suspense :-P

Also worth pointing out is that I haven't touched on technologies for mocking and stubbing - that's for another time.

As always, heap criticism my way and I'll happily fight my corner and stubbornly refuse to back down :-P (second smiley within a few paragraphs - time to end this post).

UPDATE:  I've pushed some of my code to GitHub and intend to expand on it using the various other technologies.  The repository can be found on my GitHub.

Published in Testing, .NET, JavaScript on April 03, 2011