Writing maintainable unit tests

The project I am currently working on has about seven hundred unit tests. On previous projects I have worked on, there have been as many as two thousand. Unit tests are definitely not just for Christmas; they are for the lifetime of the code base! Their purpose is to minimise the number of bugs checked into source control. Unit tests are also meant to act as documentation of the code. I have a rule that if I cannot understand what a particular unit test is trying to do within thirty seconds, it needs to be refactored! Generally, the tests I cannot make sense of suffer from noise around the test and the test class itself.

This blog post aims to introduce three guidelines to help improve the maintainability and readability of your unit tests by:

  1. altering the layout of your test classes
  2. reducing common noise
  3. and improving readability

I recently introduced these three guidelines to my team and have found that the quality and understanding of each test has greatly improved, particularly when refactoring or fixing bugs in existing classes.


1) Structuring Unit Tests
When unit testing properly, most public methods within an application will have multiple unit tests associated with them. These tests should cover the different outputs of a method, usually based on different sets of input data. Given a class under test with many public methods, the test class can quite often become overloaded, making it difficult to see without close inspection what each test method is actually testing, i.e. which method is under test and for which scenario.

With this in mind, I recently read a post by Phil Haack detailing a solution he had come across to improve the readability of what each test class is actually doing. The solution is simple: for each public method you are testing, create a nested class within your unit test class to encapsulate all the test methods for that method. This effectively groups tests together by public method. It may seem like overkill at first, but to avoid repetition you can define the setup and teardown methods in the base class, which means all the nested classes can inherit from it and reuse anything that should be shared.

The following example shows the basic structure of a test class that tests two public methods, the first called Get and the second Post.


    [TestClass]
    public class BlogPostTaskTests
    {
        protected Mock<IBlogPostRepository> blogPostRepository;

        [TestInitialize]
        public void Setup()
        {
            //shared setup, generally just initialising mocks
        }

        [TestClass]
        public class TheGetMethod : BlogPostTaskTests
        {
            [TestMethod]
            public void HappyPath()
            {
                //arrange
                //act
                //assert
            }

            [TestMethod]
            public void UnhappyPath()
            {
                //arrange
                //act
                //assert
            }
        }

        [TestClass]
        public class ThePostMethod : BlogPostTaskTests
        {
            [TestMethod]
            public void HappyPath()
            {
                //arrange
                //act
                //assert
            }

            //Additional Tests for the Post method here
        }

        [TestCleanup]
        public void Cleanup()
        {
        }
    }



2) Use the Test Data Builder Pattern for test data
Most unit tests require some input data for the method under test, and some expected output to assert against. All too commonly, developers duplicate instances of objects across multiple test classes and initialise them with hard-coded test values. All this setup code adds a lot of noise to the test classes, which can make it difficult to understand what a test is doing.

Given the following class:

public class BlogPost
{
    public int Id { get; set; }

    public string Title { get; set; }

    public string Content { get; set; }
}

Scattered across many test classes, you will often find an instance of this class being initialised with test values to be used in a test.

var blogPost = new BlogPost
{
    Id = 1,
    Title = "My Title",
    Content = "My Content"
};

Following the DRY (Don’t Repeat Yourself) principle, I would recommend taking a look at the test data builder pattern. The code below is an example of this for the BlogPost class. The class follows a number of conventions for consistency:

  • Create a class for each domain object. The name of the class follows [DomainName]Builder.
  • A private member variable for each property on the underlying class is added. This variable is assigned a default value.
  • To allow you to override a default value, provide a method named With[PropertyName]. This method follows a fluent style so you can chain methods together. Only add the With methods when you need them; 95% of the time the default value will suffice, and there is no value in adding code that is never executed.
  • Add a Build method. This method creates and returns a new instance of the object with either the default or overridden values.

public class BlogPostBuilder
{
    private int id = new Random().Next(1000);
    private string title = "My Title";
    private string content = "My Content";

    public BlogPostBuilder WithTitle(string title)
    {
        this.title = title;
        return this;
    }

    public BlogPost Build()
    {
        return new BlogPost
        {
            Id = id,
            Title = title,
            Content = content
        };
    }
}

An example of using the builder class:

var blogPost = new BlogPostBuilder()
    .WithTitle("Another Title")
    .Build();

Each unit test method should be responsible for initialising its own test data, as opposed to initialising the test data within the test setup. This makes it clear what data each unit test is utilising.
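Putting the first two guidelines together, a test inside one of the nested classes might look something like the following sketch. Note that the BlogPostTask constructor, its Get method and the repository's GetById method are assumptions made for illustration; your own class under test will differ.

```csharp
[TestClass]
public class TheGetMethod : BlogPostTaskTests
{
    [TestMethod]
    public void ReturnsThePostWithTheRequestedId()
    {
        //arrange - the test builds its own data, so the intent is obvious
        var blogPost = new BlogPostBuilder()
            .WithTitle("Another Title")
            .Build();

        //GetById is a hypothetical repository method for this example
        blogPostRepository
            .Setup(r => r.GetById(blogPost.Id))
            .Returns(blogPost);

        var task = new BlogPostTask(blogPostRepository.Object);

        //act
        var result = task.Get(blogPost.Id);

        //assert
        Assert.AreEqual("Another Title", result.Title);
    }
}
```

Because the builder supplies sensible defaults, the test only states the values it actually cares about, which keeps the arrange section short and the intent clear.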


3) Use unit testing extension plugins to improve readability
I have recently been looking at a very useful library called Fluent Assertions, which provides extensions on top of unit testing frameworks, with the aim of improving readability by following more natural language and reducing the amount of code required.

Fluent Assertions is effectively a set of extension methods on top of commonly used unit testing frameworks such as NUnit and MSTest.

As a simple example you could take the following Assert statement:

Assert.IsTrue(actualOutcome > 5);

and rewrite it as follows:

actualOutcome.Should().BeGreaterThan(5);

This is a simple example, but Fluent Assertions provides a wealth of extensions.
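For instance, here are a few more assertions in the same style (the variables are hypothetical, but each method shown is part of the Fluent Assertions API):

```csharp
// strings - assertions can be chained with And
blogPost.Title.Should().StartWith("My").And.EndWith("Title");

// objects
result.Should().NotBeNull();
result.Should().BeOfType<BlogPost>();

// collections
blogPosts.Should().HaveCount(3).And.OnlyHaveUniqueItems();
```

A nice side effect is that the failure messages read like the assertion itself, so a broken test explains what went wrong in plain language.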

You can install Fluent Assertions via NuGet with the following command from the Package Manager Console within Visual Studio:

Install-Package FluentAssertions

You can read more about it on the project's documentation site.


Next time you write some tests, try following the three guidelines above; I guarantee it will improve the maintainability and readability of your tests.

I would be very interested to hear any guidelines you follow to improve your unit tests. Thanks for reading.

Learning from mistakes with BDD

BDD (Behaviour Driven Development) is a great agile technique that adds value to the three main steps of software development: defining, coding and testing.

This hit list should hopefully provide some guidance for introducing, or simply improving, BDD in your organisation. The list is drawn from my own experience introducing BDD as a new concept to my team over the last few months. It is in no particular order.

  1. Don’t rush into automation.
    Undoubtedly one of the key outputs of BDD is an automated set of tests covering the scenarios raised against a story. However, it is very easy to jump straight into the deep end. It’s not all about the tool, and automated tests can be a maintenance nightmare. Get a proper framework in place and use the right patterns, such as the page object pattern, particularly if you’re writing your tests against the UI layer. Trust me, this will save you a lot of rework.
  2. Don’t spend hours arguing about the correct language to use. Try to land a few ground rules early: decide on the terms you will use to refer to elements of your system and stick to them. Consult your domain experts. It is important to follow a ubiquitous language, so everyone is on the same page. Make sure everyone understands that Given sets the context, When is an event and Then is an observable result.
  3. See what others are doing. You didn’t invent BDD, Dan North did! Do some reading; there are many different people with many different views on the internet. Read other people’s opinions and pick out the points that will work for you and your team.
  4. Write scenarios as a team. Don’t get one person to write them; it’s not about one person’s view. Include the BAs, testers and devs. Scenario writing is a team sport.
  5. Have the conversation. The best thing about writing or even discussing scenarios is that it gets you thinking off the happy-path track. Get the whole scrum team in a room and just talk through the requirements. As you start discussing a story in more detail, you’ll continuously hear questions raised: ah, but what if I do this… or what if I do that? Ensure that this is not just on the first day of the sprint; put regular review meetings in the calendar.
  6. Don’t add implementation details in scenarios. Avoid tying scenarios to the UI by referencing page elements or controls. Avoid tying scenarios to the architecture or the technologies used under the covers. Everything that occurs beneath the UI is up to the architect and developers to design.
  7. Add tests to the continuous integration process as early as possible. When you do automate, get the tests running as part of the build as soon as possible; this keeps the tests alive. The key to automated tests is maintenance. Making them part of the build process means you will find out fast when they fail.
  8. Use your scenarios. You’ve put the time in to write the scenarios, use them! Make sure they are held centrally under source control. Use them to develop against and particularly use them as test cases. Testers should be adding additional scenarios during the course of a sprint.
  9. Include the SME (Subject Matter Expert), domain expert and customer, even if just to replay the scenarios against them. These people understand the problem domain more than anyone. In particular, the customer is the one who will use the software.
  10. Keep scenarios precise. I have often seen long-winded scenarios that try to capture multiple scenarios in one. No one wants to read a 20-30 line scenario. Keep them small and precise; each scenario should cover a single case. Include examples for clarity.
  11. Use examples to reinforce the scenario. Examples bring scenarios to life! They make them dynamic and they provide test data to verify that the scenario meets its goal. Adding examples means that not only have you written your test cases upfront, you’ve also included the data with which to test them.
  12. Every scenario is negotiable and is subject to change at any time. With the information available on one day, a scenario might seem to capture the requirement. However, additional information, or even the realisation that the scenarios do not cover all cases, may lead you and your team to revisit them and add, edit or even delete scenarios. If you do not change a suite of scenarios for a given story after they are first conceived, you are almost certainly doing something wrong. These changes may well mean that acceptance criteria, or even stories, are subject to change too.
  13. Your scenarios are your living documentation. Make them available to the whole business; they are also a great resource for your support team. If you use tools like SpecFlow, the tooling more often than not provides the ability to generate reports. Get these executed as part of your build and make them accessible to everyone.
  14. Make things visual. Using techniques such as story maps and including wireframes are a great way to visualise the requirements, and can be a great driver for conversation. Stick them on a wall so everyone can see them.
  15. Sign off scenarios. A story can only be considered complete when all its scenarios have been implemented. At the end-of-sprint review meeting, when demoing what’s been implemented in the sprint, replay the functionality to the team through the scenarios.
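To illustrate points 2, 10 and 11 together, a small, precise scenario backed by examples might look like the following sketch (the feature and data are made up for illustration):

```gherkin
Feature: Blog post publishing

Scenario Outline: A draft post can only be published with a title
  Given a draft blog post with the title "<title>"
  When the author publishes the post
  Then the post status should be "<status>"

  Examples:
    | title    | status    |
    |          | draft     |
    | My Title | published |
```

The scenario covers a single rule, uses no UI or implementation details, and the examples table supplies the test data upfront.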

For me, agile is about making the invisible visible. What I mean by this is that agile is all about working as a team to deliver software. The key point is that the software is built collaboratively as a unit, with input from the business, customers, devs, BAs and testers. It’s about having the conversation, constantly reviewing, getting quick feedback and sharing knowledge. It’s always worth remembering the agile manifesto, which gets this bang on in four simple statements!

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

I would love to hear any comments from your own experiences implementing BDD within your organisation. Thanks for reading.

Agile techniques – dev box testing

Dev box testing is a practice all teams should be doing. The idea is simple. When a developer writes a piece of code, whether to fix a bug or implement a task within a story, then just before the code is committed to the source control repository, the developer asks the assigned tester and BA to come and sit at the developer’s machine for a demo of the changes that have been made. Hence the name: dev box testing.

The steps involved in dev box testing can be summed up with the following:

  • The developer should demonstrate what he/she has done and run through the changes to the application that have been made.
  • The developer should let the BA or tester run the application and perform at minimum happy path testing.
  • This provides an opportunity for the BA to ensure that what has been produced meets the requirement.
  • This allows the testers to understand what has been developed. The developer should share with the tester information about how to test the code change by means other than just the UI, i.e. any database changes, service calls etc.
  • If any issues are found, the developer will fix them, and get them retested before anything is checked in.
  • Dev box testing should not normally take more than 10 minutes.

Its real value is that it allows code to fail fast, which, although it may sound a bit dramatic, is actually a really good thing! For example, even if you have a fantastically slick continuous integration process in place, the code still needs to be committed to source control, the unit tests run and the build deployed to a test environment before a tester and the BA can access the code change. This means there is always a delay between a code change being checked in and it being available to test. If a defect is found, such as a bug or a piece of functionality implemented incorrectly or not at all, a vicious cycle begins. This is outlined in the diagram below.

typical flow before dev box testing

Dev box testing helps by minimising the feedback cycle in terms of time, because code is tested before it is checked in. Any defects or unimplemented requirements can be picked up and dealt with immediately. It also greatly improves the sharing of the knowledge held by the developer with the testers. This knowledge sharing allows the tester to truly understand how the changes were implemented and how the application can be tested under the covers, away from just UI testing. Sharing this knowledge whilst it’s still hot and fresh in the developer’s mind is essential.

The diagram below outlines a typical dev box testing flow. The key point to understand is that after the developer initially writes the code, the flow moves into an iterative cycle between having the code dev box tested and the developer coding any changes on the back of that. This cycle repeats until the BA and tester are happy with the change and the code can be committed to source control.

dev box testing flow

Dev box testing strongly encourages collaboration between the BA, developer and tester because it forces conversation, which is key to successful software projects. It is important to point out that dev box testing is not about performing a full regression test; it is more of a sanity check to ensure no obvious errors get deployed.

Dev box testing is a great tool in the agile armoury that every team should employ. Any technique that aims to reduce the bugs committed into source control and actively encourages collaboration between team members has to be a good thing. There is no reason why you can’t start today!