magpiebrain

The site of Sam Newman, a consultant at ThoughtWorks

Posts from the ‘Java’ category

*We need a snappier name…

As the calendar gets a little crowded in December, Simon, Jez and I have decided to combine the Java, Django/Rails and Python nights into one big bash at the Old Bank Of England, on the 12th of December, from 7pm onwards. As usual there is no particular agenda, although people wanting to do demos are more than welcome (just let me know first so we can publicise it).

Oh, and as Jez has had to stick a deposit down for the reserved space, please leave a comment to let us know you’re coming!

A recent post by Cedric in response to an article by Michael Feathers (danger, community navel gazing alert! Obviously this blog is above such things) casts doubt on the necessity of unit tests.

Michael’s rules on what counts as a unit test seem overly draconian to some (not being able to touch the filesystem, for example – 99% of the time I’d agree, but if I’m testing my FileConfiguration class, what the hell am I supposed to do?). Despite many other people attempting to define what a unit test is, Cedric’s take comes closest to my own understanding:

…a unit test is traditionally defined as a set of test that exercises a single class in isolation of all others.

Cedric then goes on to say that 90% of the tests he sees written with JUnit are not unit tests. I for one would be wary of working on those codebases. He also states that he doesn’t care if the test he writes is a unit or functional test – which immediately makes me worry about working on his codebase.

The point that gets missed here is that Unit and Functional tests are not different because of how they work, how long they take or the tools required to make them work, but they are different because they fulfill different roles in development.

Unit tests, because they focus on testing one class and one class only, make it much easier to create highly focused, loosely coupled code, as you create and test a class in isolation from other code. Because you have good test coverage at the single-class level, you can safely refactor the class and know you haven’t changed the behavior of the unit. As you are writing code and testing at a small level, you can commit your code more frequently, enabling frequent, smaller-scale integration. And if you are testing at a class level, the tests themselves become a living document detailing how your code works.
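To make that concrete, a unit test in this sense looks something like the sketch below (PriceCalculator and TaxRates are made-up names, defined inline purely for illustration) – one class under test, with its single collaborator stubbed so nothing else in the codebase gets exercised:

import junit.framework.TestCase;

// Hypothetical role the class under test depends on.
interface TaxRates {
  double rateFor(String category);
}

// Hypothetical class under test.
class PriceCalculator {
  private final TaxRates rates;
  PriceCalculator(TaxRates rates) { this.rates = rates; }
  double grossPrice(double net, String category) {
    return net * (1 + rates.rateFor(category));
  }
}

// The test exercises PriceCalculator and nothing else: the TaxRates role is
// stubbed in the test, so there is no database, filesystem or network involved.
public class PriceCalculatorTest extends TestCase {
  public void testAddsTaxToNetPrice() {
    TaxRates stubRates = new TaxRates() {
      public double rateFor(String category) { return 0.175; }
    };
    PriceCalculator calculator = new PriceCalculator(stubRates);
    assertEquals(11.75, calculator.grossPrice(10.00, "books"), 0.001);
  }
}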

Functional tests are supposed to test slices through your system – from UI to back-end and back if possible. You are asserting that your system conforms to the behavior expected by your users.

Unit tests tend to be fast because they limit the code they test. Functional tests tend to be slow, because they test more code, and also interact with slow components (e.g. databases, browsers, networks etc) but their speed doesn’t define what type of test they are.

Often there is a need to separate fast tests from slow tests – and TestNG’s categorisation of tests is perhaps the first feature of TestNG that has interested me. But that categorisation exists to enable fast feedback cycles (e.g. run a quick build on checkin, and a longer build on the hour or every day – see Speeding up the build and build pipelining for more details); it should not be used to confuse functional and unit tests. Developers need to understand the different requirements for, and benefits of, these two different types of tests.
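For what it’s worth, the categorisation itself is just an annotation attribute – something like the sketch below (the class and group names are made up), with the checkin build’s suite file including only the “unit” group and the slower scheduled build running everything:

import org.testng.annotations.Test;

public class AccountServiceTests {

  // Fast, in-memory, single class under test - runs in the checkin build.
  @Test(groups = { "unit" })
  public void calculatesInterestForEmptyAccount() {
    // ...
  }

  // Talks to a real database - runs in the hourly/nightly build only.
  @Test(groups = { "functional" })
  public void persistsAccountAcrossRestart() {
    // ...
  }
}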

It’s been a while since I’ve had to unleash the angry kitten, but it seems that the OpenAdaptor developers have rather inadvisably irked me, to the extent that I’ve withheld the kitten’s food for two days now, to make the ensuing carnage all the more…well, carnagy.

To start with, I like the concept and some of the architecture behind OpenAdaptor. It’s a fine architecture – it’s no surprise that plenty of other APIs out there have come up with similar solutions to the same problems. The JBI spec obviously owes a lot to OpenAdaptor. That said, its implementation frequently makes me want to have a few minutes with the developers, a pair of pliers and a blowtorch.

Creating sinks, pipes and sources

Let’s start with the fact that to create anything you’re practically forced to use property files. No Inversion of Control for you my son, oh no – here we have a special init method, which is not a constructor because…well, it’s called init, it’s enforced by the API, and it takes a Properties object. The only benefit over being able to construct objects the way you want seems to be that it made it easy for the developers to use property files (I hope for the next version they at least look at Spring). That makes testing so much fun. Please note the sarcasm.
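To spell out the difference I mean, here’s a rough sketch (the class and property names are mine, not OpenAdaptor’s):

import java.io.FileNotFoundException;
import java.io.PrintWriter;
import java.util.Properties;

// The property-file style: a no-arg object configured via init(Properties).
// The dependency is invisible in the signature, and a test has to build a
// Properties object just to get a usable instance.
class PropertyConfiguredSink {
  private PrintWriter writer;

  public void init(Properties props) throws FileNotFoundException {
    this.writer = new PrintWriter(props.getProperty("sink.file"));
  }
}

// The constructor style: the dependency is stated up front, so a test can
// hand in whatever writer it likes - no property file required.
class ConstructorConfiguredSink {
  private final PrintWriter writer;

  public ConstructorConfiguredSink(PrintWriter writer) {
    this.writer = writer;
  }
}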

JARs not built using debug information

Yes, you heard me right. Here was I, thinking we lived in the age of high-speed internet connections. But in fact it seems internet bandwidth is so tight that the developers considered it a luxury to provide debug information that would let us work out for which odd reason an Adaptor fails.

Errors that are not errors – or might be

So when your adaptor fails with no clear reason why – well, what should you do? That’s right, turn on debug – chances are a debug message will detail why your adaptor failed. Even better, sometimes the error message will be vague, for example the one that tells me that either my JNDI server may be unavailable, or that it is there but the thing I want isn’t, or that it is there and the thing I want is there, only I might be missing a class, only it won’t tell me what class that might be. That is an error message that can be really fun to work with. And I don’t mean the good type of fun either, I mean the achingly irritating kind of fun that can only result in me hurting my foot kicking office furniture.

A nice idea crippled by an allergy to interfaces

OpenAdaptor has a rather sensible and useful-sounding feature called message hospitals. Normally, if a message causes an exception the app shuts down. You can however choose to configure a message hospital, which will capture all messages that throw an exception – but only an exception of a given type. However, by “type” OpenAdaptor means “class” – you have to throw a checked PipelineException. This is bad enough – I now have to have a top-level catch-Throwable clause that rethrows PipelineException – but the constructor doesn’t even take a root cause. Quite why a simple ExceptionType interface couldn’t be used I have no idea.
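The workaround ends up looking something like this – a sketch of the pattern described above, with processRecord() and the exact PipelineException constructor assumed rather than taken from the API:

// Everything gets caught and rethrown as the one checked type the hospital
// understands - and because the constructor takes no cause, the original
// stack trace survives only as text.
public void process(Object record) throws PipelineException {
  try {
    processRecord(record);
  } catch (Throwable t) {
    throw new PipelineException("Processing failed: " + t);
  }
}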

Conclusion

OpenAdaptor has a sound premise, but its implementation needs bringing up to date. A full rewrite in the form of OpenAdaptor 3 is underway – get involved.

Package dependencies in Pasta

Today on my current project I had to spend some time creating a client JAR file for use by downstream systems. I needed to keep the JAR as small as possible, with (more importantly) no external JAR dependencies.

There are tools out there for dependency analysis, such as IBM’s Structural Analysis for Java (aka Smallworlds), Compuware’s Pasta or the open source JDepend. As nice as these tools are, they are for analysis and the gathering of metrics. As I’ve mentioned before, unless ‘good metrics’ are in some way enforced by the build process, it becomes very easy for even automatically gathered metrics to be ignored.

JDepend can be run via Ant, but just like Emma or Findbugs the information gathered is used for reporting purposes – it is not capable of failing a build because someone has introduced an invalid dependency between packages.

Japan is a tool which comes with an Ant plugin that will fail a build if your code violates the allowed package dependencies. For example, our downstream systems only need to use classes in the client package – I don’t want to have to include any other code in the client JAR file. So in a Japan config file I place the following code:

Missing

Now if any of the classes in client use any other packages in my codebase, the build will fail. It’s important to note that Japan requires that you define all your dependencies at the package depth you specify (package-depth="4" means that all source code in com.company.app.xxx and below will be checked) – so for example if our gui package depended on util and client, I’d have to add the line:

Missing

By defining these configurations you can quickly discover circular dependencies – for example, if to get the build passing you find yourself defining something like this:

Missing

You know something is up. The other thing I like about Japan is the fact that because I can now enforce sensible package dependencies, I feel better about spending some time cleaning our packages up, safe in the knowledge that we won’t backslide (assuming no-one goes and sticks everything in one giant package of course). There was one little problem I had though – I had to disable transitive dependency checking as it caused a stack overflow error, but I think once we remove our existing circular dependencies that should sort itself out. I still of course like to have tools like Pasta to help me define acceptable inter-package dependencies, but I feel much happier having Japan in the build just in case I start getting sloppy :-).

Two of the three bugs on my watch list have in recent days been marked for inclusion in Mustang (Java 6). RFE 4057701, entitled “Need way to find free disk space”, has been in since virtually the year dot, and was originally dropped from the first NIO JSR:

Resolution of this request is considered a high priority for Mustang.
Active development on this feature has been in progress for the past
several months. I expect that we will complete this work within the
next couple of months. These methods will be in Mustang-beta.

I’m trying to work out how this feature required “several months” of development, but at least it’s a “high priority”. Let’s gloss over the fact that even if I am still programming in Java by the time Java 6 is out, it’ll be another year or two before my clients will be using it.
Less important, but still a “woho!” moment for me, is the inclusion of RFE 4726365 – “Java 2D to support LCD optimized anti-aliased text (sub-pixel resolution)”.

The quality of Java2D text antialiasing leaves a lot to be
desired in comparison to OS naitve antialiasing, especially
ClearType on Windows.

Java2D uses gray scale antialiasing. At point sizes below a
certain threshold (around 14pt it seems) this just does not
look very good, with text appearing ‘lumpy’ and uneven.
Either the way antialiasing interacts with font hinting
needs to be improved, or there should be a settable
rendering hint so that text below a certain size is not
antialiased. In Windows standard text antialiasing, this
works well. Larger font sizes are antialiased, where the
most benefit is seen, and smaller font sizes are untouched,
making them legible.

Decent anti-aliasing is one of the key features in making Java UIs fit in better with native applications, and the fact that the UI will pick this up automatically should mean existing applications will look better for free, at least where LCDs are concerned. Now I hope someone looks at RFE 4109888.
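For the curious, the way this surfaces in Java2D is as a text antialiasing rendering hint – the sketch below uses the LCD hint value that eventually shipped in Java 6, so treat it as illustrative:

import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import javax.swing.JComponent;

public class LcdTextLabel extends JComponent {
  protected void paintComponent(Graphics g) {
    Graphics2D g2 = (Graphics2D) g;
    // Ask for LCD (sub-pixel) text antialiasing rather than plain greyscale.
    g2.setRenderingHint(RenderingHints.KEY_TEXT_ANTIALIASING,
        RenderingHints.VALUE_TEXT_ANTIALIAS_LCD_HRGB);
    g2.drawString("Antialiased text", 10, 20);
  }
}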

When Java first arrived, the initial hype was all about “write once, run anywhere”. Its language is a mess of compromises, born out of its heritage as a language for embedded machines and a desire to keep C++ programmers happy. Once people got over the novelty of inherently portable code, attention then fell (initially favourably) on Applets, followed by protests against the non-portable nature of AWT. Penetration outside of the web was minor.

It was with some interest that I spotted a positive mention of something called Context IoC (a new type of IoC, apparently) on Dion Hinchcliffe’s Blog. The whole topic really bores me right now, as IoC stopped being something fancy a long time ago, and to me it’s now nothing more than “calling a proper constructor”. I investigated further as Dion normally writes very sensibly, does some very nice diagrams and has impeccable taste.

Contextual IoC is outlined in a ServerSide piece by Sony Mathew. It’s not surprising I missed it the first time round as I stopped reading the ServerSide a while back. It turned out I really wasn’t missing much. He starts off on shaky ground:

IOC seems to be at odds with the fundamental paradigm of object encapsulation. The concept of upstream propagation of dependencies is indeed a contradiction to encapsulation. Encapsulation would dictate that a caller should know nothing of its helper objects and how it functions. What the helper objects do and need is their business as long as they adhere to their interface – they can grab what they need directly whether it be looking up EJBs, database connections, queue connections or opening sockets, files etc..

He almost makes a point here, but ends up saying “objects should interface with things with interfaces”. To those of us who create interfaces to reflect the role a concrete class plays in a system, this is nothing new. Quite how this is “at odds with the fundamental paradigm of object encapsulation” I’m not quite sure.

Things then look up a bit:

The benefits of IOC are that objects become highly focused in its functionality, highly reusable, and most of all highly unit testable.

That I agree with. Then we’re off again:

The problem with exposing dependencies via constructors or Java bean properties in general is that it clutters an object and overwhelms the real functional purposes of the constructor and Java bean properties.

I agree when it comes to setters exposed purely to provide dependencies – but constructors? At this point it’s clear that Sony and I disagree on what a constructor is – apparently creating an object with the tools it needs to do its job is not what a constructor is for. So we’re left waiting for Sony’s solution – and that solution is Contextual IoC.

Let’s look at Sony’s example of this:


public class PoolGame {

  public interface Context {
    public String getApplicationProperty(String key);
    public void draw(Point point, Drawable drawable);
    public void savePoolGame(String name,
      PoolGameState poolGameState);
    public PoolGameState findPoolGame(String name);
  }

  // (field declaration added so the snippet compiles; elided in the original)
  private Context context;

  public PoolGame(Context cxt) {
    this.context = cxt;
  }

  ...
}

Right, so what’s happened here is that we’ve taken the dependencies (in the form of the method calls the PoolGame requires) and folded them into an inner interface. Question 1 – how is this different from passing in the roles as separate interfaces? It seems to me we have the need for three distinct roles, something along the lines of PoolRepository (for savePoolGame() and findPoolGame()), Display (for draw()) and ApplicationConfig (for getApplicationProperty()). My version of the code would be:


public class PoolGame {
  public PoolGame(
    PoolRepository rep,
    Display display,
    ApplicationConfig config) {
    ...
  }
}

So Sony’s code is dependent on having implementations of four methods; my code has a dependency on implementations of three roles in the system. Let’s look at the “benefits” of this approach:

The Context interface removes the clutter of exposing dependencies via constructors or Java bean properties

Does my constructor look any more cluttered than Sony’s inner Context interface? I know my object is going to be easy to create too – how easy is it to create Sony’s? Will it look as simple as constructing my version? So I dispute that benefit. Next:

It provides better clarity by organizing an object’s dependencies into a single personalized interface unique to that class. The PoolGame.Context interface is unique to the PoolGame class.

And that’s a benefit how? It’s a simple matter to see where my roles are used in the system – with contextual IoC you have to look for Context implementations that use things like PoolRepository, which would still need to exist. You’d still need to create things that perform storage and queries of games, things that display the games, or access application properties. But with contextual IoC you also have to create context objects to wrap them up.
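To make the extra wrapping concrete, here’s roughly what one of those context objects ends up looking like (my sketch, reusing the role interfaces from my version above – not code from Sony’s article):

// A Context implementation is pure plumbing: every method delegates to a
// role object that could have been injected directly via the constructor.
public class PoolGameContext implements PoolGame.Context {
  private final PoolRepository repository;
  private final Display display;
  private final ApplicationConfig config;

  public PoolGameContext(PoolRepository repository, Display display,
      ApplicationConfig config) {
    this.repository = repository;
    this.display = display;
    this.config = config;
  }

  public String getApplicationProperty(String key) {
    return config.getProperty(key);
  }

  public void draw(Point point, Drawable drawable) {
    display.draw(point, drawable);
  }

  public void savePoolGame(String name, PoolGameState poolGameState) {
    repository.savePoolGame(name, poolGameState);
  }

  public PoolGameState findPoolGame(String name) {
    return repository.findPoolGame(name);
  }
}

And compare the two construction calls:

new PoolGame(new PoolGameContext(repository, display, config));
new PoolGame(repository, display, config);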