I have moved my active blog over to Tumblr. I’ve maintained this blog for reference but will be posting to http://www.robustsoftware.co.uk instead. I’ve pointed my Feedburner feed at Tumblr, so if you’re already subscribed you should have switched over with me.

A better ActionResult: Open Rasta edition (part 2)

Sebastien Lambla, who created Open Rasta, mentioned that I could use an operation interceptor rather than a pipeline contributor, as I did in the first “A better ActionResult” post.

I must admit that I didn’t know about operation interceptors, which is the main reason I didn’t use one in the first post. I took this as an opportunity to learn a bit more about Open Rasta, so I looked into what operation interceptors are and how to create them.

Creating an operation interceptor is an unsurprisingly simple task. You can either implement the IOperationInterceptor interface yourself or inherit from OperationInterceptor, which implements IOperationInterceptor with virtual methods that have no effect on the operation.
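
For reference, the interface looks roughly like this. I’ve reconstructed it from memory and from the members we use below, so treat the exact signatures, particularly RewriteOperation, as an approximation rather than gospel:

public interface IOperationInterceptor
{
    // Invoked before the operation executes.
    bool BeforeExecute(IOperation operation);

    // Lets you wrap or replace the invocation of the operation itself.
    Func<IEnumerable<OutputMember>> RewriteOperation(
        Func<IEnumerable<OutputMember>> operationBuilder);

    // Invoked after the operation has executed, with access to its output members.
    bool AfterExecute(IOperation operation,
                      IEnumerable<OutputMember> outputMembers);
}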

Inheriting from the base class is the easier option, so that’s how we’re going to implement our interceptor.

The IOperationInterceptor interface has three methods: BeforeExecute, RewriteOperation and AfterExecute. We want to work with the result of the operation, so we’ll override AfterExecute. Once invoked, the interceptor works in a virtually identical way to the pipeline contributor, bar a little refactoring, as you can see:

public class CommandOperationResultInterceptor : OperationInterceptor
{
    readonly IDependencyResolver resolver;

    public CommandOperationResultInterceptor(IDependencyResolver resolver)
    {
        this.resolver = resolver;
    }

    public override bool AfterExecute(IOperation operation,
                                      IEnumerable<OutputMember> outputMembers)
    {
        var outputMember = outputMembers.FirstOrDefault();
        if (outputMember == null) return true;

        var command = outputMember.Value as CommandOperationResult;
        if (command == null) return true;

        outputMember.Value = ProcessCommand(command);

        return true;
    }

    object ProcessCommand(CommandOperationResult command)
    {
        resolver.AddDependencyInstance(command.GetType(),
                        command, DependencyLifetime.PerRequest);

        var commandHandlerType = typeof(CommandOperationResultHandler<>)
            .MakeGenericType(command.GetType());

        var commandHandler = (ICommandOperationResultHandler)
            resolver.Resolve(commandHandlerType);

        return commandHandler.Execute();
    }
}

This is a direct replacement for the pipeline contributor. It uses the exact same commands and command handlers, so the only other difference is that we register the operation interceptor instead of the pipeline contributor:

ResourceSpace.Uses.CustomDependency<IOperationInterceptor,
    CommandOperationResultInterceptor>(DependencyLifetime.Transient);

That’s all there is to it. Really simple, given that we already had the code for the pipeline contributor.

The only difference between the pipeline contributor and the operation interceptor is that, rather than dealing with a single operation result, there could be multiple output members. From what I can gather there is only ever one output member, which is the assumption I’ve embedded in the interceptor, but I may be mistaken.
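
If that assumption turns out to be wrong, the interceptor could process every output member rather than just the first. Here’s a minimal sketch, reusing the ProcessCommand method from above:

public override bool AfterExecute(IOperation operation,
                                  IEnumerable<OutputMember> outputMembers)
{
    if (outputMembers == null) return true;

    // Rewrite every output member that holds a CommandOperationResult,
    // not just the first one.
    foreach (var outputMember in outputMembers)
    {
        var command = outputMember.Value as CommandOperationResult;
        if (command == null) continue;

        outputMember.Value = ProcessCommand(command);
    }

    return true;
}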

Reflecting on the two approaches, I don’t think there’s much to choose between them. I prefer the original method of using a pipeline contributor: it is more visible via the debug output of the pipeline and it is easier to retrieve the operation result at that point. However, knowing that operation interceptors exist and how to use them is beneficial. Another weapon in my Open Rasta armoury.

A better ActionResult: Open Rasta edition

I’ve been meaning to blog more about my experiences with Open Rasta but haven’t had a sufficiently focused topic to work with so far. However, whilst reading Jimmy Bogard’s post on a better ActionResult I thought something similar would be equally possible with Open Rasta. Go and read Jimmy’s post if you haven’t already, because I’m going to assume you have.

First, some things to know about Open Rasta. It handles requests by passing them through a pipeline. This isn’t some mythical process that you have to spend hours working out, as with MVC; it is instead made up of pipeline contributors, which you register as part of your configuration. To help you debug, Open Rasta prints out all the contributors, in the order they’ll be executed, during initialisation. Another difference in Open Rasta is what things are called: controllers are handlers, actions are operations, action results are operation results and models are resources. Their function is similar enough to require no further explanation in the context of this post.

Open Rasta gives you about 15 contributors to form the basis of request handling, which you can replace or add to as you see fit. To emulate Jimmy’s code, we will create a new contributor and insert it into the pipeline. Contributors are pretty simple to create and register; here’s all the code for our custom pipeline contributor:

public class CommandOperationResultInvokerContributor : IPipelineContributor
{
    readonly IDependencyResolver resolver;

    public CommandOperationResultInvokerContributor(IDependencyResolver resolver)
    {
        this.resolver = resolver;
    }

    public void Initialize(IPipeline pipelineRunner)
    {
        pipelineRunner.Notify(ExecuteCommand)
            .After<KnownStages.IOperationExecution>()
            .And.Before<KnownStages.IOperationResultInvocation>();
    }

    PipelineContinuation ExecuteCommand(ICommunicationContext context)
    {
        if (!(context.OperationResult is CommandOperationResult))
        {
            return PipelineContinuation.Continue;
        }

        var commandOperationResultType = context.OperationResult.GetType();

        resolver.AddDependencyInstance(commandOperationResultType,
            context.OperationResult, DependencyLifetime.PerRequest);

        var commandHandlerType = typeof(CommandOperationResultHandler<>)
            .MakeGenericType(commandOperationResultType);

        var commandHandler = (ICommandOperationResultHandler)
            resolver.Resolve(commandHandlerType);

        context.OperationResult = commandHandler.Execute();

        return PipelineContinuation.Continue;
    }
}

The Initialize method is called during initialisation so that Open Rasta can determine where in the pipeline you want your contributor to be invoked. The KnownStages class contains a bunch of interface aliases for significant stages in the standard pipeline, allowing you to latch onto them without being tied to a particular implementation.

KnownStages.IOperationExecution is the Open Rasta equivalent of invoking the controller action and KnownStages.IOperationResultInvocation is the equivalent of executing the ActionResult produced by the action. We’re slipping in between these two steps so we can execute our command handlers when needed, allowing us to change the OperationResult before it is executed.

So what do we actually do when this pipeline contributor gets invoked?

First we check whether the OperationResult attached to the ICommunicationContext is of the CommandOperationResult type. This is identical to checking whether the ActionResult is of the BetterActionResult type in Jimmy’s post. If it isn’t, we exit our pipeline contributor, telling Open Rasta to carry on to the next stage in the pipeline.

public abstract class CommandOperationResult : OperationResult
{
}

If it is a CommandOperationResult, we retrieve the type of the current OperationResult and use it to register the OperationResult with our dependency resolver for the lifetime of the current request. The reason we do this is so that the command handler can take the OperationResult as a dependency. I’m not sure if this is any better, but it does avoid using reflection to execute the command handler.

As a side note, Open Rasta uses an IDependencyResolver interface throughout the codebase. It ships with its own implementation; I know there’s an implementation for Ninject and I think there’s one for StructureMap knocking around somewhere. I’ve only used the internal implementation as it does everything I’ve needed so far.

We then work out the type of CommandOperationResultHandler we’ll need to handle this command and use it to retrieve an instance from the dependency resolver. Again, this is very similar to Jimmy’s code, apart from the fact that we cast the result to an ICommandOperationResultHandler. This is the second thing that lets us avoid reflection.

public interface ICommandOperationResultHandler
{
    OperationResult Execute();
}

public abstract class CommandOperationResultHandler<T>
    : ICommandOperationResultHandler
{
    protected readonly T command;

    protected CommandOperationResultHandler(T command)
    {
        this.command = command;
    }

    public abstract OperationResult Execute();
}

We execute the command handler, which will have taken the CommandOperationResult as a dependency because we registered it earlier. It could also take the ICommunicationContext as a dependency, as Open Rasta registers that for you, along with IRequest, IResponse and the other things that make up ICommunicationContext.
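
For example, a hypothetical handler that audits which URI a command was issued against could take the ICommunicationContext alongside the command. This is purely a sketch; AuditingCommandHandler isn’t part of the code in this post:

public class AuditingCommandHandler<TCommand>
    : CommandOperationResultHandler<TCommand>
    where TCommand : CommandOperationResult
{
    readonly ICommunicationContext context;
    readonly ILogger logger;

    public AuditingCommandHandler(
        TCommand command, ICommunicationContext context, ILogger logger)
        : base(command)
    {
        this.context = context;
        this.logger = logger;
    }

    public override OperationResult Execute()
    {
        // The communication context gives us the current request, so we
        // can record which URI the command was issued against.
        logger.WriteInfo(command.GetType().Name
                         + " handled for " + context.Request.Uri);

        // Hand back a standard OperationResult as usual.
        return new OperationResult.OK();
    }
}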

The result of executing the command handler is set as the OperationResult of the current ICommunicationContext; this encourages you to swap out the CommandOperationResult for one of the standard OperationResult objects, which will then get handled as normal by the next stage in the pipeline, KnownStages.IOperationResultInvocation.

Here’s my equivalent of Jimmy’s DeleteRequestResult and DeleteRequestResultInvoker:

public class DeleteCommand<T> : CommandOperationResult
{
    public DeleteCommand(T resource)
    {
        Resource = resource;
    }

    public T Resource { get; private set; }
}

public class DeleteCommandHandler<T>
    : CommandOperationResultHandler<DeleteCommand<T>>
{
    readonly ISession session;
    readonly ILogger logger;

    public DeleteCommandHandler(
        DeleteCommand<T> command, ISession session, ILogger logger)
        : base(command)
    {
        this.session = session;
        this.logger = logger;
    }

    public override OperationResult Execute()
    {
        session.Delete(command.Resource);
        logger.WriteInfo("Deleted " + command.Resource);

        return new OperationResult.SeeOther
                   {
                       RedirectLocation = command.RedirectLocation
                   };
    }
}

This is how you would issue the delete command from your handler:

public class ResourceHandler
{
    readonly ISession session;

    public ResourceHandler(ISession session)
    {
        this.session = session;
    }

    public OperationResult Delete(int id)
    {
        var resource = session.Get<Resource>(id);

        return new DeleteCommand<Resource>(resource)
                   {
                       RedirectLocation = typeof(HomeResource).CreateUri()
                   };
    }
}

The last thing we need to do is register our pipeline contributor and command handler so they will be used.

ResourceSpace.Uses.PipelineContributor<CommandOperationResultInvokerContributor>();

ResourceSpace.Uses.CustomDependency<
    CommandOperationResultHandler<DeleteCommand<Resource>>,
    DeleteCommandHandler<Resource>>(DependencyLifetime.Transient);

That’s all there is to it.
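
One thing to note: the command handler is registered against a closed generic type, so each resource you want to handle this way needs its own registration. For a hypothetical Product resource that would look like this:

// Hypothetical Product resource, registered with the same pattern as Resource above.
ResourceSpace.Uses.CustomDependency<
    CommandOperationResultHandler<DeleteCommand<Product>>,
    DeleteCommandHandler<Product>>(DependencyLifetime.Transient);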

Personally I prefer how this works in Open Rasta but then I would say that. I like how the result is manipulated by adding a step to the pipeline rather than having to override behaviour as in MVC. It just seems tidier and more flexible. The other differences in implementation could be transferred and are probably a matter of personal preference.

Which do you think is better?

CQRS: Crack for architecture addicts

I’m getting a bad feeling about yet another high-brow architecture. CQRS is a complex solution to a complex problem.

*NEWSFLASH*

Your problem is not complex enough to warrant the overhead of a complex solution

For the 1% of people who can rightly say “but my problem is complex enough” ask yourselves this: is it really that complex? I mean really.

Be honest now. Are you jumping at the latest architecture all the cool kids are talking about? Do you have twenty message buses passing data around because your intranet application might need to scale to millions of users one day? If you do, you probably don’t need CQRS for technical reasons but because you’re an architecture addict.

If you’ve got this far you either really do need to use CQRS or you have a serious problem. So ask yourself this: you’re probably an architect or senior developer. Can the rest of your team fully understand the directions you’ll be giving from your ivory tower? If half of them can’t, you are choosing the wrong architecture. I don’t care if it fits your problem perfectly. If your team can’t handle it, it’s the wrong choice.

If you still think CQRS is the right solution then you are in a very select group. You have a complex domain, scaling is a big problem for you and you have a team capable of taking the burden of a complex solution. Are you hiring?

That or you’re completely deluded.

Or you are a SOA consultant that is selling CQRS as the silver bullet for all development problems.

And I hate you.

What is BDD (Behaviour Driven Design)?

“What is BDD?” is a question that’s been doing the rounds lately within the altnetgroup and devbookclub. There’s a great deal of mysticism surrounding it as if it were some exclusive members club. I’m going to slay some of the myths and tell you what BDD really is.

Myth 1: BDD requires a framework or tool

You don’t need to use MSpec, RSpec, Cucumber, etc. to practise BDD. Sure, they can help, but they are not a prerequisite. You don’t even have to write your tests within a certain construct such as Given-When-Then.

Myth 2: BDD requires UAT

User Acceptance Testing is a great concept but is one of those practices that only the elite, working with an exceptional client, get to do in real life. If you have UAT as part of your process, all power to you, but that’s all it is: part of your process. It is not part of BDD. It is merely coincidental that BDD-style tests are the best way to express your acceptance tests.

Myth 3: BDD has to be done top-down

It is easier to approach BDD in a top-down manner. But much like it’s easier to put your underwear on first, that doesn’t work for everyone in every situation. Just ask Superman. The proponents of BDD often state things in a radical fashion due to a fear of BDD being thrown in the same pot as TDD. That doesn’t mean that every time they say “you must” they actually mean it.

So what is BDD?

At its core, BDD is TDD done with a specific mindset: testing the intent of the system rather than testing a particular piece of code. The difference is subtle but the effects are large. There’s an entire discipline dedicated to the power of semantics, after all.

It is the difference between “when I add a new post to my blog, the new post shows up on my homepage” and “calling the create new post method on the blog controller saves a new post and the new post is passed to the homepage view when the home controller’s index action is called”. Both sentences describe the same code, but the BDD style signals the intent and is decoupled from the implementation, while the non-BDD style describes what the code is doing almost line for line. The decoupling of the tests from the implementation is where the largest benefit of BDD comes from. Your tests will be less brittle, which makes refactoring easier, perhaps even fun, and your code more malleable.
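
To make the contrast concrete, here’s a hypothetical pair of NUnit tests for that blog example. The bodies don’t matter; it’s the names that differ:

using NUnit.Framework;

[TestFixture]
public class AddingANewPost
{
    // BDD style: named after the observable behaviour, so it survives
    // refactoring of controllers and views.
    [Test]
    public void TheNewPostShowsUpOnTheHomepage()
    {
        // arrange, act and assert elided for brevity
    }

    // Implementation style: named after the wiring between controllers,
    // so it breaks as soon as that wiring changes.
    [Test]
    public void CreateOnBlogControllerSavesPostAndIndexPassesItToTheHomepageView()
    {
        // arrange, act and assert elided for brevity
    }
}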

The myths aren’t completely unfounded: using constructs such as Given-When-Then helps enforce the required mindset. Frameworks such as MSpec, RSpec, Cucumber or even a bespoke one help you write tests within that construct. Working from the top down helps you work from the behaviour of the user interface down to the behaviour of the data access. UAT helps you drive the behaviour of the system from the requirements of the client.

That’s all these things do though, help you develop in the BDD style. You can still write your code in a non-BDD style using all these frameworks, tools and processes, just as you can develop in the BDD style using none of them.

BDD is a state of mind.

BDD from scratch – Build your own framework (Part 2)

In part one I covered how to organise your tests into the Given-When-Then style through the use of an abstract base class. In this post I’m going to show how you can manage your mocks more efficiently. Though not really a part of BDD, I’ve found this very useful in clarifying the behaviour described in my tests, as it reduces the noise caused by variable definitions.

I’m going to walk you through an example test, starting with where we finished in part 1. We are going to be testing this standard ASP.NET MVC controller method:

namespace RobustSoftware.BddFromScratch.WebApplication.Controllers
{
    public class BooksController : Controller
    {
        private readonly IBookService bookService;
        private readonly IAuthenticationService authenticationService;

        public BooksController(IBookService bookService, IAuthenticationService authenticationService)
        {
            this.bookService = bookService;
            this.authenticationService = authenticationService;
        }

        public ViewResult YourBooks()
        {
            var currentUser = authenticationService.CurrentUser();
            ViewData.Model = bookService.OwnedBy(currentUser);
            return View("YourBooks");
        }
    }
}

I’m going to be using Moq as my mocking framework for this test; again I’ve applied the same concept to other frameworks such as Rhino Mocks. Here is how our test starts out:

namespace RobustSoftware.BddFromScratch.WebApplication.Test.Controllers.Books
{
    public class DisplayingYourBooks : BehaviourTest
    {
        private MockFactory factory;
        private BooksController controller;
        private Mock<IBookService> bookService;
        private Mock<IAuthenticationService> authenticationService;
        private ViewResult result;
        private List<Book> yourBooks;
        private User currentUser;

        protected override void Given()
        {
            factory = new MockFactory(MockBehavior.Loose);

            bookService = factory.Create<IBookService>();
            authenticationService = factory.Create<IAuthenticationService>();

            yourBooks = new List<Book>();
            currentUser = new User();

            authenticationService.Setup(x => x.CurrentUser()).Returns(currentUser);
            bookService.Setup(x => x.OwnedBy(currentUser)).Returns(yourBooks);

            controller = new BooksController(bookService.Object, authenticationService.Object);
        }

        protected override void When()
        {
            result = controller.YourBooks();
        }

        [Then]
        public void ShownYourBooksView()
        {
            Assert.AreEqual("YourBooks", result.ViewName);
        }

        [Then]
        public void PassedListOfYourBooks()
        {
            Assert.AreSame(yourBooks, result.ViewData.Model);
        }
    }
}

The obvious step to reduce the amount of test code on display would be to move the setup of the MockFactory up into the base class. We’re going to take it a step further than that though and introduce a mock management class. Here is what it looks like:

namespace RobustSoftware.BddFromScratch.Framework
{
    public class MockManager
    {
        private readonly MockFactory factory;
        private readonly IDictionary<Type, object> mockDictionary;

        public MockManager()
        {
            factory = new MockFactory(MockBehavior.Loose);
            mockDictionary = new Dictionary<Type, object>();
        }

        public Mock<T> Mock<T>() where T : class
        {
            var type = typeof(T);

            if (!mockDictionary.ContainsKey(type))
            {
                mockDictionary.Add(type, factory.Create<T>());
            }

            return mockDictionary[type] as Mock<T>;
        }
    }
}

This is a wrapper around a dictionary of mock objects. When we ask for a mock that has not been requested before, one is created. Otherwise, the previously created mock is retrieved from the dictionary and returned. We’ll add the creation of the mock manager to our abstract base class and expose the Mock<T> method for use within our tests:

namespace RobustSoftware.BddFromScratch.Framework
{
    [TestFixture]
    public abstract class BehaviourTest
    {
        private MockManager mockManager;

        protected Mock<T> Mock<T>() where T : class
        {
            return mockManager.Mock<T>();
        }

        [TestFixtureSetUp]
        public void Setup()
        {
            mockManager = new MockManager();

            Given();
            When();
        }

        protected abstract void Given();
        protected abstract void When();
    }
}

Now we can utilise the mock manager within our original test, reducing the lines of code considerably:

namespace RobustSoftware.BddFromScratch.WebApplication.Test.Controllers.Books
{
    public class DisplayingYourBooks : BehaviourTest
    {
        private BooksController controller;
        private ViewResult result;
        private List<Book> yourBooks;
        private User currentUser;

        protected override void Given()
        {
            yourBooks = new List<Book>();
            currentUser = new User();

            Mock<IAuthenticationService>().Setup(x => x.CurrentUser()).Returns(currentUser);
            Mock<IBookService>().Setup(x => x.OwnedBy(currentUser)).Returns(yourBooks);

            controller = new BooksController(Mock<IBookService>().Object, Mock<IAuthenticationService>().Object);
        }

        protected override void When()
        {
            result = controller.YourBooks();
        }

        [Then]
        public void ShownYourBooksView()
        {
            Assert.AreEqual("YourBooks", result.ViewName);
        }

        [Then]
        public void PassedListOfYourBooks()
        {
            Assert.AreSame(yourBooks, result.ViewData.Model);
        }
    }
}

As we no longer have to worry about variable declaration and assignment for our mocks, we can leave the test code to signal the intent of our test. As there is no real setup for the fields currentUser and yourBooks, I’d be tempted to in-line those variables but this is a matter of personal taste:

namespace RobustSoftware.BddFromScratch.WebApplication.Test.Controllers.Books
{
    public class DisplayingYourBooks : BehaviourTest
    {
        private BooksController controller;
        private ViewResult result;
        private List<Book> yourBooks = new List<Book>();
        private User currentUser = new User();

        protected override void Given()
        {
            Mock<IAuthenticationService>().Setup(x => x.CurrentUser()).Returns(currentUser);
            Mock<IBookService>().Setup(x => x.OwnedBy(currentUser)).Returns(yourBooks);

            controller = new BooksController(Mock<IBookService>().Object, Mock<IAuthenticationService>().Object);
        }

        protected override void When()
        {
            result = controller.YourBooks();
        }

        [Then]
        public void ShownYourBooksView()
        {
            Assert.AreEqual("YourBooks", result.ViewName);
        }

        [Then]
        public void PassedListOfYourBooks()
        {
            Assert.AreSame(yourBooks, result.ViewData.Model);
        }
    }
}

As you can see, this reduces the amount of code required to establish the same context to test a given behaviour. The fewer lines of code you need to set up a test, the easier that test is going to be to understand in the future. This, along with the more explanatory naming of your tests, also makes them much easier to maintain.

Next up, I’ll show how we can utilise our MockManager to implement auto mocking. This will reduce the amount of code in our test slightly, but more importantly it makes our test suite less brittle.
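
To give a rough idea of where that’s heading, here’s one possible shape for it (not necessarily how I’ll end up implementing it in the next post): auto mocking just means building the class under test by satisfying each constructor parameter from the MockManager.

// A possible addition to MockManager. Assumes the greediest constructor
// of T takes only interface parameters that Moq is able to mock.
public T CreateWithMocks<T>()
{
    var constructor = typeof(T).GetConstructors()
        .OrderByDescending(c => c.GetParameters().Length)
        .First();

    var arguments = constructor.GetParameters()
        .Select(p => MockObjectFor(p.ParameterType))
        .ToArray();

    return (T)constructor.Invoke(arguments);
}

object MockObjectFor(Type parameterType)
{
    // Calls Mock<T>() for the parameter type via reflection and hands back
    // the mocked instance, reusing any mock already created for this test.
    var mock = (Mock)typeof(MockManager)
        .GetMethod("Mock")
        .MakeGenericMethod(parameterType)
        .Invoke(this, null);

    return mock.Object;
}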


BDD from scratch – Build your own framework (Part 1)

BDD is a higher level of unit testing: it creates better documentation of your system by recording its intent, which makes your system easier to learn for new developers and to relearn when you revisit your code further down the line.

I’m going to show you how to build your own BDD testing framework on top of a vanilla unit testing framework. For my example I’m going to use NUnit, but I’ve applied the same principles with MbUnit and xUnit, and it should work with any other unit testing framework too.

What’s good about doing things this way, as opposed to using a purpose-built BDD framework like MSpec, is that it allows your tests to exist and be run side-by-side with traditional unit tests. This can ease the migration to the new style if you’re all moving over to it, and it lets you write tests in the BDD style for some parts of the system whilst sticking with the traditional unit style for other parts. Though I’ll be amazed if you don’t end up writing all your tests in the BDD style, as everyone I’ve shown it to loves it.

I’ll show you how to create the basis for your own BDD framework and I’ll give an example of how a BDD style test looks in comparison to a traditional unit test.

What is BDD all about?

The core concept of BDD is Given-When-Then (often referred to as Arrange-Act-Assert):

  1. Given the system is in a certain state (the context of the test)
  2. When an action is performed on the system (normally calling a single method)
  3. Then a list of assertions should be satisfied

Herein lies another fundamental difference with BDD: you will create a test fixture per scenario rather than a fixture per class, as is common with traditional unit testing.
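
To illustrate, with the base class we’re about to build you end up with small fixtures named after scenarios rather than one large fixture named after the class under test. The class and method names here are purely hypothetical:

// Traditional: one fixture per class under test.
[TestFixture]
public class CustomerServiceTests
{
    [Test] public void Register_SavesCustomer() { }
    [Test] public void Register_RejectsDuplicateEmail() { }
    [Test] public void Delete_RemovesCustomer() { }
}

// BDD: one fixture per scenario; the class name carries the context.
public class RegisteringANewCustomer : BehaviourTest
{
    protected override void Given() { /* a valid, unregistered customer */ }
    protected override void When() { /* the customer registers */ }

    [Test]
    public void TheCustomerIsSaved() { }

    [Test]
    public void AWelcomeEmailIsSent() { }
}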

The basis of our framework

The way to transform your standard unit testing framework to a BDD one is to use an abstract base class. This will modify how your tests are run, giving you the ability to specify your Given, When and corresponding Thens separately from one another:

using NUnit.Framework;

namespace RobustSoftware.BddFromScratch.Framework
{
    [TestFixture]
    public abstract class BehaviourTest
    {
        [TestFixtureSetUp]
        public void Setup()
        {
            Given();
            When();
        }

        protected abstract void Given();
        protected abstract void When();
    }
}

This means that the contents of your Given and When methods will be run once, before any of your methods decorated with the Test attribute are run. This is important because it means none of your assertions should have side effects (this should be the case already). You would use it in a test class by doing something like this:

using System;
using NUnit.Framework;

namespace RobustSoftware.BddFromScratch.Framework
{
    public class ExampleBehaviour : BehaviourTest
    {
        protected override void Given()
        {
            // the system is setup in a certain state
        }

        protected override void When()
        {
            // a defined action is performed on the system
        }

        [Test]
        public void ThisAssertionShouldBeSatisfied()
        {
            Assert.IsTrue(true);
        }

        [Test]
        public void AnotherAssertionShouldBeSatisfied()
        {
            Assert.IsTrue(true);
        }
    }
}

Notice that the intent verified by each assertion is documented in the name of the test method. This means that when you run your tests, the names of the tests themselves show you what was meant to happen, rather than relying on the error messages that most people forget to add to their assertions.

Another thing that I like to do, though it’s entirely optional, is to alias the Test attribute so that I can use a Then attribute with the same behaviour:

using NUnit.Framework;

namespace RobustSoftware.BddFromScratch.Framework
{
    public class ThenAttribute : TestAttribute
    {
    }
}

This just changes the original example test slightly so it’s more obvious how the Given-When-Then style is being applied:

using System;
using NUnit.Framework;

namespace RobustSoftware.BddFromScratch.Framework
{
    public class ExampleBehaviour : BehaviourTest
    {
        protected override void Given()
        {
            // the system is setup in a certain state
        }

        protected override void When()
        {
            // a defined action is performed on the system
        }

        [Then]
        public void ThisAssertionShouldBeSatisfied()
        {
            Assert.IsTrue(true);
        }

        [Then]
        public void AnotherAssertionShouldBeSatisfied()
        {
            Assert.IsTrue(true);
        }
    }
}

Converting a traditional test

I’ll start off with what I think is a pretty standard unit test that I found in the source of OpenRasta:

[Test]
public void a_change_after_a_creation_results_in_a_new_oject_with_the_same_properties()
{
    var binder = new KeyedValuesBinder("customer", typeof(Customer));
    binder.SetProperty("username", new[] { "johndoe" }, (str, type) => BindingResult.Success(str));
    binder.BuildObject();

    binder.SetProperty("firstname", new[] {"john"}, (str, type) => BindingResult.Success(str));
    var customer = (Customer)binder.BuildObject().Instance;

    customer.Username.ShouldBe("johndoe");
    customer.FirstName.ShouldBe("john");
}

As I see it, this test, like many other traditional unit tests, is already logically split into the Given-When-Then style:

[Test]
public void a_change_after_a_creation_results_in_a_new_oject_with_the_same_properties()
{
    // Given - the context we are establishing in the system
    var binder = new KeyedValuesBinder("customer", typeof(Customer));
    binder.SetProperty("username", new[] { "johndoe" }, (str, type) => BindingResult.Success(str));
    binder.BuildObject();

    binder.SetProperty("firstname", new[] {"john"}, (str, type) => BindingResult.Success(str));

    // When - the action we are performing on the system
    var customer = (Customer)binder.BuildObject().Instance;

    // Then - checking the system ends up in the desired state
    customer.Username.ShouldBe("johndoe");
    customer.FirstName.ShouldBe("john");
}

So splitting it physically into the separate sections is not much of a leap:

public class ChangingAnObjectAfterACreation : BehaviourTest
{
    private KeyedValuesBinder binder;
    private Customer customer;

    protected override void Given()
    {
        binder = new KeyedValuesBinder("customer", typeof(Customer));
        binder.SetProperty("username", new[] { "johndoe" }, (str, type) => BindingResult.Success(str));
        binder.BuildObject();

        binder.SetProperty("firstname", new[] { "john" }, (str, type) => BindingResult.Success(str));
    }

    protected override void When()
    {
        customer = (Customer) binder.BuildObject().Instance;
    }

    [Then]
    public void UsernameShouldBeJohnDoe()
    {
        Assert.AreEqual("johndoe", customer.Username);
    }

    [Then]
    public void FirstNameShouldBeJohn()
    {
        Assert.AreEqual("john", customer.FirstName);
    }
}

I've dropped the use of the assertion extension methods that Sebastien Lambla was using (I’ll cover how to write those later in the series), but otherwise the actual test code has remained the same.
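
For reference, a minimal version of such an assertion extension method might look like this. This is my own sketch on top of NUnit, not how OpenRasta defines its ShouldBe:

using NUnit.Framework;

namespace RobustSoftware.BddFromScratch.Framework
{
    public static class AssertionExtensions
    {
        // Reads as actual.ShouldBe(expected) while delegating to NUnit.
        public static void ShouldBe<T>(this T actual, T expected)
        {
            Assert.AreEqual(expected, actual);
        }
    }
}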

The physical separation of the logical parts of the test makes it easier to follow and keeps the assertions at least as explicit, even without the extension methods being used.

The fact that BDD makes explicit the separation many people already make when writing traditional unit tests is why I and other people like it. It makes your tests more readable: you can often skim the Given section, since that should be conveyed by the name of the entire fixture, leaving you to see what method is being called and what assertions are being made. The assertions are often more descriptive now, as each is given a method name.

If you’ve got any questions, leave a comment and I’ll reply as soon as I can. Do the same if you have any related topics you’d like to see covered. I’m planning on covering mock management, auto-mocking, fluent test extensions and shared contexts.
