magpiebrain

The site of Sam Newman, a consultant at ThoughtWorks

Archive for ‘January, 2005’

Yes, it’s true. The all-pervading air of Mac Mini hype has finally brought me to my knees – no longer can I hope to stand up to such a barrage of “but it’s so small” or “but it’s so cheap!” and most importantly “but it’s so cute – you can buy one if you want, but I get to use it!”. The current plan is to twin it with Elgato’s EyeTV 410 and a 20” Apple Cinema display. Sure, after I add that, a DVD burner, more memory, and an Apple Care package it isn’t looking quite so cheap, but what the hey, it is damn cute – those comparing it with similarly priced options from Dell are completely missing the point if you ask me.

So what I’ll end up with will be a Freeview-recording, DVD burning, time-shifting monster, in a relatively nice package. Who knows, it might even replace my TiVo. Now I wonder if I’ll be able to use VNC to watch TV on my laptop….hmm….

As described before, the build pipeline is a series of builds, each performing some specific task. The result of one build becomes the input of the next.

Many people see the pipeline as describing only those parts which can be automated – as such, you’ll often see the pipeline end far short of production-ready code. Once it’s got past the final automated barrier, the code plunges into a grey mass of manual, distributed, often ad-hoc processes. For the build pipeline to work at all, it has to continue all the way to producing production-ready code. That is not to say the whole pipeline should be automated.
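
To make the chaining concrete, here’s a rough sketch (Python pseudo-code with made-up script names, not a real build) of one way of wiring stages together so that the artifact from one build feeds the next:

import subprocess

def stage(command):
    """Wrap a (hypothetical) build script as one stage in the pipeline."""
    def run(previous_artifact):
        # the artifact produced by the previous build is handed in as an argument
        result = subprocess.run(command + [previous_artifact],
                                capture_output=True, text=True)
        if result.returncode != 0:
            raise SystemExit("stage %s failed:\n%s" % (command[0], result.stderr))
        # by convention in this sketch, each script prints the path of the
        # artifact it produced, which becomes the input to the next stage
        return result.stdout.strip()
    return run

# Made-up scripts standing in for the real compile, unit test and packaging builds
PIPELINE = [
    stage(["./compile.sh"]),
    stage(["./unit_tests.sh"]),
    stage(["./package_war.sh"]),
]

def run_pipeline(source_dir):
    artifact = source_dir
    for build in PIPELINE:
        artifact = build(artifact)   # output of one build becomes input of the next
    return artifact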

Continue reading…

Update 1: Fixed arrows on two of the images – damn Visio…

Update 2: Tweaked the final diagram to show build artifacts being checked in and retrieved

Let’s look at a fairly simple build.

[Diagram: dev_build.gif]
Continue reading…

First off, I’d like to say that 100% test coverage does not mean that your code is bug free – it simply means you haven’t written a test good enough to find a bug. Your tests could even cause all of your code to be executed, but not have a single assert anywhere.

How much assurance you can gain from having unit tests depends on the quality of the tests themselves, and on how much of your code is covered by them. Tools like Clover and Emma can tell you what code gets exercised as part of a test run.

A build is considered passed if it meets certain criteria – typically that it compiles and that the unit tests pass. What is to stop code coverage being used as one of these criteria? Beforehand you agree a level below which code coverage is not allowed to fall (on an existing project this is easier – you could set the existing code coverage percentage as the target). Any new code has to have at least the same coverage, if not better, for the build to pass. If you submit new code in a continuous integration environment and the code coverage is worse, your submission gets rejected. In practice this should result in improving code coverage, which in turn allows the target level to be raised.
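
As a sketch of how the criterion might be enforced, assume the coverage tool dumps the overall percentage into a plain-text file and the agreed target sits in another – both file names here are invented:

def read_percentage(path):
    with open(path) as f:
        return float(f.read().strip())

def coverage_gate(report="coverage.txt", target_file="coverage_target.txt"):
    coverage = read_percentage(report)      # what the coverage tool measured
    target = read_percentage(target_file)   # the level coverage must not fall below
    if coverage < target:
        # the submission is rejected: coverage has fallen below the agreed level
        raise SystemExit("Coverage %.1f%% is below the agreed target of %.1f%%"
                         % (coverage, target))
    # if coverage has improved, raise the target so it can only get better from here
    with open(target_file, "w") as f:
        f.write("%.1f" % coverage)
    print("Coverage %.1f%% meets the target of %.1f%%" % (coverage, target))

if __name__ == "__main__":
    coverage_gate()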

Update: Fixed typo

Introduction

Slow builds are perhaps the most irritating thing for a developer, and having worked on two projects now with 30+ minute continuous integration builds, they’re something of a personal bugbear for me. Before we look at some ways to speed up the build, I thought I’d start with my definition of what a build is…

What is a build?

Quite simply, a build is the process of taking source code and producing some artifacts. The artifacts could be a war file, some test output or even a full-blown desktop application. A project might (and probably should) have several different builds for different purposes – one for developers, one for QA, one for production, and so on.
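
As a toy illustration of the “several builds for several purposes” point – the targets and artifact names below are invented, not a real project’s:

def developer_build():
    # quick feedback: compiled classes and unit test output, nothing more
    return ["classes/", "unit-test-report.txt"]

def qa_build():
    # everything the developers get, plus a deployable war and functional test output
    return developer_build() + ["app.war", "functional-test-report.txt"]

def production_build():
    # only the artifacts that actually ship
    return ["app.war", "release-notes.txt"]

BUILDS = {"dev": developer_build, "qa": qa_build, "production": production_build}

if __name__ == "__main__":
    import sys
    target = sys.argv[1] if len(sys.argv) > 1 else "dev"
    print("%s build produces: %s" % (target, BUILDS[target]()))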

Speeding up the build

If you have a build which you want to go faster, there are two approaches you can take:

  1. Speed up the creation of the artifacts themselves
  2. Reduce the number of artifacts being created

I’m not really going to talk much about the first point – that is typically either trivial or very hard. If you have to compile 10,000 source files, there is relatively little you can do to speed that up. If you have some tests which are IO bound, you can speed them up by mocking out the IO where possible.

The biggest wins come when you can reduce what the build is doing, either by reducing the number of artifacts it is trying to produce, or by reducing the scope of the artifacts being produced.

Option 1: Reduce the work

You might consider splitting the build along horizontal lines, in order to create fewer artifacts. For example, a typical developer build will compile and run the unit tests, and might produce a deployed application for exploratory testing. You could split this single build into three separate builds – one to compile the code, one to test it, and one to create the deployed app. This is only useful if not all the artifacts are required – in many circumstances they are. One technique that has worked in my experience is the idea of a tiered build – we had two continuous integration builds, one designed for fast developer feedback which ran just the unit tests, and a longer QA build which ran all the regression and functional tests. This is something I’ll go into at a later date.
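
A rough sketch of what the tiered arrangement might look like, with made-up script names standing in for the real targets:

import subprocess

def run_suite(script):
    # each suite is just a script in this sketch; in reality these would be Ant targets
    subprocess.run(["./" + script], check=True)

def fast_developer_build():
    # runs on every check-in: compile plus unit tests only, for quick feedback
    for script in ["compile.sh", "unit_tests.sh"]:
        run_suite(script)

def qa_build():
    # runs less often: everything the fast build does, plus the slow suites
    fast_developer_build()
    for script in ["regression_tests.sh", "functional_tests.sh"]:
        run_suite(script)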

Option 2: Divide and conquer

The other approach is to split things along vertical lines. In the case of the developer build, keep the build compiling, testing and deploying the code, but just have it compile, test and deploy less code, by splitting the code along functional boundaries. This will work as long as the developer is working completely within said functional boundary – subdivide your code base too finely and you’ll introduce too much complexity in terms of integrating your artifacts. Subdivide too broadly and you’ll end up compiling and testing code needlessly.

Attempting to break up a code base after work has already started is not a simple job. It will involve disruption to development, and your development effort will need to be refocused to work as much as possible within your new components. Working out where to break code up before you start work is likewise not easy – it might not be clear initially where the divisions can be made, and you might find out later that you didn’t need to break up the code, in which case all you’ve done is add complexity to your build process. In either case, whilst the individual component builds will end up being simpler, and the builds for each component shorter, builds at the integration level will become more complex.
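
To illustrate the vertical split, here’s a sketch with invented component names – each component build stays small, while the integration build picks up the extra complexity:

import subprocess

COMPONENTS = ["billing", "ordering", "reporting"]   # invented functional boundaries

def component_build(name):
    # a developer working inside one component only ever needs to run this
    subprocess.run(["./build_component.sh", name], check=True)
    return name + ".jar"                             # the artifact this component publishes

def integration_build():
    # the integration build carries the extra complexity: it gathers every
    # component's artifact and runs the cross-component tests against them
    artifacts = [component_build(name) for name in COMPONENTS]
    subprocess.run(["./integration_tests.sh"] + artifacts, check=True)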

A quick run-through of the last year at magpiebrain is in order, I think:

  • January saw me looking at IoC and getting far more interested than I should have. I also found del.icio.us, and the web was a good place again.
  • February and I was playing with Spring, XWork and TDD. My article on IoC goes up at java.net – shame on me!
  • March and I was writing up stuff on regular expressions before I started my current job.
  • April saw me finally sort out permalinks and send a new site design live. A long commute also saw some longer than average posts.
  • May and I was playing around with a few things I never came back to.
  • June came around and I was playing with Naked Objects, I upgraded this site to Movable Type 3, and I went to Glastonbury.
  • July and I started looking beyond the crap a bit more, and purposely started writing things to piss people off. It kind of worked…
  • August was a very slow month, as my new client was sapping all ability to even think about interesting (or otherwise) topics.
  • September was another slow month, for much the same reason, but I did get to escape from being build-bitch long enough to code for a couple of days, so it wasn’t all bad.
  • October – ditto
  • November – ditto. It seems being a build-bitch isn’t something which promotes frequent blogging.
  • December was much better, thanks in part to managing to make it to a Java meet-up in London, work finally generating some subjects which I thought blogging about would be of interest, and I found out I was off to a new client in the new year.

So happy new year everyone – roll on 2005…