Sam Newman's site. Sam is a Consultant at ThoughtWorks.

First off, I’d like to say that 100% test coverage does not mean that your code is bug free – it simply means that none of your tests is good enough to find a bug. Your tests could even execute every line of your code without containing a single assert.

How much assurance you can gain from having unit tests depends on the quality of the tests themselves, and on how much of your code they cover. Tools like Clover and Emma can tell you which code gets executed as part of a test run.

A build is considered passed if it meets certain criteria – typically that it compiles and that the unit tests pass. What is to stop code coverage being used as one of these criteria? Beforehand you agree on a level below which code coverage is not allowed to fall (on an existing project this is easier – you can set the current coverage percentage as the target). Any new code has to have at least the same coverage, if not better, for the build to pass. If you submit new code in a continuous integration environment and the coverage drops, your submission gets rejected. In practice this should gradually improve code coverage, which in turn allows the target level to be raised.
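The ratchet rule described above boils down to a single comparison in the build. As a minimal sketch (the method and class names are illustrative, not from Clover, Emma or any particular CI tool):

```java
public class CoverageGate {
    /**
     * The build passes only if the coverage of the new submission is at
     * least the agreed baseline. On an existing project, the baseline
     * would start as the current coverage percentage and be ratcheted
     * upwards as coverage improves.
     */
    static boolean buildPasses(double newCoverage, double baseline) {
        return newCoverage >= baseline;
    }

    public static void main(String[] args) {
        double baseline = 0.75; // the agreed level, e.g. current project coverage
        System.out.println(buildPasses(0.80, baseline)); // coverage improved: passes
        System.out.println(buildPasses(0.70, baseline)); // coverage fell: rejected
    }
}
```

In a real continuous integration setup the `newCoverage` figure would come from the coverage tool's report, and a failed check would fail the build just like a failing unit test.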

5 Responses to “Enforcing test coverage in the build”

  1. Simon Brunning

    Have you seen Jester[1]? It finds out if your code is actually being tested by *mutating* it. If a piece of code can change without causing your tests to fail, then clearly it isn’t being tested.

    Never had a go myself, and I imagine it must take *ages* – an occasional thing, not part of your standard build – but it looks cool.

    [1] “”:
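The mutation idea Simon describes can be sketched in a few lines: flip an operator in the code under test, and see whether any test notices. This is a hand-rolled illustration, not how Jester itself works internally; all names are hypothetical.

```java
import java.util.function.IntBinaryOperator;

public class MutationSketch {
    // Original code under test.
    static int max(int a, int b) { return a > b ? a : b; }

    // A mutant: the comparison operator has been flipped.
    static int maxMutant(int a, int b) { return a < b ? a : b; }

    // A weak test: equal inputs cannot distinguish original from mutant.
    static boolean weakTest(IntBinaryOperator f) {
        return f.applyAsInt(2, 2) == 2;
    }

    // A stronger test with distinct inputs kills the mutant.
    static boolean strongTest(IntBinaryOperator f) {
        return f.applyAsInt(3, 1) == 3;
    }

    public static void main(String[] args) {
        // The mutant survives the weak test: that code isn't really tested.
        System.out.println("weak test kills mutant: " + !weakTest(MutationSketch::maxMutant));
        // The strong test fails on the mutant, i.e. it kills it.
        System.out.println("strong test kills mutant: " + !strongTest(MutationSketch::maxMutant));
    }
}
```

A surviving mutant is exactly the situation Simon points at: the line is covered, but no test would fail if its behaviour changed.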

  2. Sam Newman

    Yeah – a nice idea but from what I’ve heard is of little practical use. I know the guy who wrote it, and he doesn’t even use it 🙂

  3. Robert Watkins

You do have to be somewhat careful with this idea. In a team that doesn’t really buy into the test coverage concept, you find (well, _I_ have found) that people will write silly little tests just to hit 100% coverage. Also, people seem to _stop_ writing tests once they reach 100% coverage, even though many of the possible interactions haven’t been explored yet.

I find it useful to set a threshold level, and then examine the fluctuations in coverage levels over time (something the Clover history report makes very easy). If the threshold level is 100%, then there is no fluctuation; this suggests to me that I would be losing some information.

    With a team that is committed to writing meaningful tests, and with the maturity to do so, this becomes less of an issue, of course.
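Robert's point about watching fluctuations rather than pinning coverage at 100% can be sketched as a simple range over the per-build coverage history. The class and method names here are illustrative; a real setup would read these figures from a tool's history report, such as Clover's.

```java
import java.util.List;

public class CoverageHistory {
    /**
     * The spread between the best and worst coverage across recent builds.
     * A fixed 100% target flattens this to zero, hiding the trend.
     */
    static double fluctuation(List<Double> coverageByBuild) {
        double min = coverageByBuild.stream().min(Double::compare).orElse(0.0);
        double max = coverageByBuild.stream().max(Double::compare).orElse(0.0);
        return max - min;
    }

    public static void main(String[] args) {
        // At a mandated 100% there is nothing to observe...
        System.out.println(fluctuation(List.of(1.0, 1.0, 1.0)));
        // ...whereas a lower threshold, say 75%, leaves the trend visible.
        System.out.println(fluctuation(List.of(0.78, 0.82, 0.80)));
    }
}
```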

  4. Sam Newman

Agreed. Attempting to enforce, in an automated fashion, an ideal that no-one agrees with is never going to be easy, practical or sensible. If you take a look at Howard’s presentation, his “pyramid of quality” shows different practices that can be used, each one dependent on the one before. For example, you can’t sell test coverage as being important unless your team already values tests.

I too would be dubious about 100% test coverage on anything other than a trivial code base. What I don’t have is any real notion of what a reasonable coverage target would be – in many ways, introducing this practice on an existing codebase would be much easier, as you already have a base level to aim for.

