magpiebrain

Sam Newman's site, a Consultant at ThoughtWorks

Posts from the ‘Java’ category

Package dependencies in Pasta

Today on my current project I had to spend some time creating a client JAR file for use by downstream systems. I needed to keep the JAR as small as possible, with (more importantly) no external JAR dependencies.

There are tools out there for dependency analysis, such as IBM’s Structural Analysis for Java (aka Smallworlds), Compuware’s Pasta or the open source JDepend. As nice as these tools are, they are for analysis and the gathering of metrics. As I’ve mentioned before, unless ‘good metrics’ are in some way enforced by the build process, it becomes very easy for even automatically gathered metrics to be ignored.

JDepend can be run via Ant, but just like Emma or Findbugs the information gathered is used for reporting purposes – it is not capable of failing a build because someone has introduced an invalid dependency between packages.

Japan is a tool which comes with an Ant plugin that will fail a build if your code violates the allowed package dependencies. For example, our downstream systems only need to use classes in the client package – I don’t want to have to include any other code in the client JAR file. So in a Japan config file I place the following code:

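Something along these lines – I’m quoting the config from memory, so treat the element and attribute names as illustrative rather than Japan’s exact syntax:

<japan package-depth="4">
  <!-- illustrative syntax: client must not depend on any other package in the app -->
  <package name="com.company.app.client"/>
</japan>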

Now if any of the classes in client include any other packages in my codebase, the build will fail. It’s important to note that Japan requires that you define all your dependencies at the package depth you define (package-depth="4" means that all source code in com.company.app.xxx and below will be checked) – so for example if our gui package depended on util and client, I’d have to add the line:

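Again with illustrative, from-memory syntax:

<package name="com.company.app.gui" depends="util, client"/>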

By defining these configurations you can quickly discover circular dependencies (see the C2 wiki on the Acyclic Dependencies Principle: http://www.c2.com/cgi/wiki?AcyclicDependenciesPrinciple) – for example, if to get the build passing you find yourself defining something like this:

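Illustrative syntax again:

<!-- gui and util each depend on the other: a cycle -->
<package name="com.company.app.gui" depends="util"/>
<package name="com.company.app.util" depends="gui"/>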

You know something is up. The other thing I like about Japan is the fact that because I can now enforce sensible package dependencies, I feel better about spending some time cleaning our packages up, safe in the knowledge that we won’t backslide (assuming no-one goes and sticks everything in one giant package of course). There was one little problem I had though – I had to disable transitive dependency checking as it caused a stack overflow error, but I think once we remove our existing circular dependencies that should sort itself out. I still of course like to have tools like Pasta to help me define acceptable inter-package dependencies, but I feel much happier having Japan in the build just in case I start getting sloppy :-).

Two of the three bugs on my watch list have in recent days been marked for inclusion in Mustang (Java 6). RFE 4057701, entitled “Need way to find free disk space”, has been in since virtually the year dot, and was originally dropped from the first NIO JSR:

Resolution of this request is considered a high priority for Mustang.
Active development on this feature has been in progress for the past
several months. I expect that we will complete this work within the
next couple of months. These methods will be in Mustang-beta.

I’m trying to work out how this feature required “several months” of development, but at least it’s a “high priority”. Let’s gloss over the fact that even if I am still programming in Java by the time Java 6 is out, it’ll be another year or two before my clients will be using it.
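For the record, the methods in question ended up on java.io.File as getTotalSpace(), getFreeSpace() and getUsableSpace() – a minimal sketch against the Java 6 API:

import java.io.File;

public class FreeSpace {
  public static void main(String[] args) {
    File root = new File("/");
    // all three return bytes; getUsableSpace accounts for permissions and quotas
    System.out.println("total:  " + root.getTotalSpace());
    System.out.println("free:   " + root.getFreeSpace());
    System.out.println("usable: " + root.getUsableSpace());
  }
}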
Less important, but still a “woho!” moment for me, is the inclusion of RFE 4726365 – “Java 2D to support LCD optimized anti-aliased text (sub-pixel resolution)”.

The quality of Java2D text antialiasing leaves a lot to be
desired in comparison to OS native antialiasing, especially
ClearType on Windows.

Java2D uses gray scale antialiasing. At point sizes below a
certain threshold (around 14pt it seems) this just does not
look very good, with text appearing ‘lumpy’ and uneven.
Either the way antialiasing interacts with font hinting
needs to be improved, or there should be a settable
rendering hint so that text below a certain size is not
antialiased. In Windows standard text antialiasing, this
works well. Larger font sizes are antialiased, where the
most benefit is seen, and smaller font sizes are untouched,
making them legible.

Decent anti-aliasing is one of the key features in making Java UIs fit in better with native applications, and the fact that the UI will pick this up automatically should mean existing applications will look better for free, at least where LCDs are concerned. Now I hope someone looks at RFE 4109888.
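For applications that want to opt in explicitly, the LCD support eventually surfaced as new values for the existing text antialiasing rendering hint – a sketch using the Java 6 constants (pre-6 JVMs only know VALUE_TEXT_ANTIALIAS_ON/OFF):

import java.awt.Graphics2D;
import java.awt.RenderingHints;

// inside a paint method: ask Java2D for ClearType-style sub-pixel antialiasing
void enableLcdText(Graphics2D g2) {
  g2.setRenderingHint(
      RenderingHints.KEY_TEXT_ANTIALIASING,
      RenderingHints.VALUE_TEXT_ANTIALIAS_LCD_HRGB); // Java 6+
}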

When Java first arrived, the initial hype was all about “write once, run anywhere”. Its language is a mess of compromises, born of its heritage as a language for embedded machines and a desire to keep C++ programmers happy. Once people got over the novelty of inherently portable code, the attention then fell (initially favourably) on Applets, followed by protests against the non-portable nature of AWT. Penetration outside of the web was minor.

It was with some interest I spotted a positive mention of something called Context IoC (a new type of IoC, apparently) on Dion Hinchcliffe’s Blog. The whole topic really bores me right now, as IoC stopped being something fancy a long time ago – to me it’s now nothing more than “calling a proper constructor”. I investigated further, as Dion normally writes very sensibly, does some very nice diagrams and has impeccable taste.

Contextual IoC is outlined in a ServerSide piece by Sony Mathew. It’s not surprising I missed it the first time round as I stopped reading the ServerSide a while back. It turned out I really wasn’t missing much. He starts off on shaky ground:

IOC seems to be at odds with the fundamental paradigm of object encapsulation. The concept of upstream propagation of dependencies is indeed a contradiction to encapsulation. Encapsulation would dictate that a caller should know nothing of its helper objects and how it functions. What the helper objects do and need is their business as long as they adhere to their interface – they can grab what they need directly whether it be looking up EJBs, database connections, queue connections or opening sockets, files etc..

He almost makes a point here, but ends up saying “objects should interface with things with interfaces”. To those of us who create interfaces to reflect the role a concrete class plays in a system, this is nothing new. Quite how this is “at odds with the fundamental paradigm of object encapsulation” I’m not quite sure.

Things then look up a bit:

The benefits of IOC are that objects become highly focused in its functionality, highly reusable, and most of all highly unit testable.

That I agree with. Then we’re off again:

The problem with exposing dependencies via constructors or Java bean properties in general is that it clutters an object and overwhelms the real functional purposes of the constructor and Java bean properties.

I agree that exposing setters purely to provide dependencies is clutter – but constructors? At this point it’s clear that Sony and I disagree on what a constructor is – apparently creating an object with the tools it needs to do its job is not what a constructor is for. At this point we’re waiting for Sony’s solution – and the solution is Contextual IoC.

Let’s look at Sony’s example of this:


public class PoolGame {

  public interface Context {
    public String getApplicationProperty(String key);
    public void draw(Point point, Drawable drawable);
    public void savePoolGame(String name,
      PoolGameState poolGameState);
    public PoolGameState findPoolGame(String name);
  }

  private final Context context;

  public PoolGame(Context cxt) {
    this.context = cxt;
  }

  ...
}

Right, so what’s happened here is that we’ve taken the dependencies (in the form of the method calls the PoolGame requires) and folded them into an inner interface. Question 1 – how is this different from passing in the roles as separate interfaces? It seems to me we have the need for 3 distinct roles, something along the lines of PoolRepository (for savePoolGame() and findPoolGame()), Display (for draw()) and ApplicationConfig (for getApplicationProperty()). My version of the code would be:


public class PoolGame {
  public PoolGame(
    PoolRepository rep,
    Display display,
    ApplicationConfig config) {
    ...
  }
}
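For completeness, the three roles are just Sony’s Context methods regrouped – the interface names here are mine, not his:

public interface PoolRepository {
  void savePoolGame(String name, PoolGameState poolGameState);
  PoolGameState findPoolGame(String name);
}

public interface Display {
  void draw(Point point, Drawable drawable);
}

public interface ApplicationConfig {
  String getApplicationProperty(String key);
}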

So Sony’s code is dependent on having implementations of four methods; my code has a dependency on implementations of three roles in the system. Let’s look at the “benefits” of this approach:

The Context interface removes the clutter of exposing dependencies via constructors or Java bean properties

Does my constructor look any more cluttered than Sony’s inner Context interface? I know my object is going to be easy to create too – how easy is it to create Sony’s? Will it look as simple as constructing my version?
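To make the comparison concrete, here’s a sketch of creating each version, assuming we already have final poolRepository, display and config references in scope:

// my version – pass the roles straight in
PoolGame game = new PoolGame(poolRepository, display, config);

// Sony's version – we need a Context implementation (here anonymous)
// that just delegates to the same three objects anyway
PoolGame sonysGame = new PoolGame(new PoolGame.Context() {
  public String getApplicationProperty(String key) {
    return config.getApplicationProperty(key);
  }
  public void draw(Point point, Drawable drawable) {
    display.draw(point, drawable);
  }
  public void savePoolGame(String name, PoolGameState poolGameState) {
    poolRepository.savePoolGame(name, poolGameState);
  }
  public PoolGameState findPoolGame(String name) {
    return poolRepository.findPoolGame(name);
  }
});

So I dispute that benefit. Next: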

It provides better clarity by organizing an object’s dependencies into a single personalized interface unique to that class. The PoolGame.Context interface is unique to the PoolGame class.

And that’s a benefit how? It’s a simple matter to see where my roles are used in the system – with contextual IoC you have to look for Context implementations that use things like PoolRepository, which would still need to exist. You’d still need to create things that perform storage and queries of games, things that display the games, or access application properties. But with contextual IoC you also have to create context objects to wrap them up.

Yes, you heard it here first folks, but Sun has finally woken up and decided to include a micro IoC container in the next version of Java! Using a special keyword, the container lets you create an object that can be used straight away, by passing all dependencies in.

I like unchecked exceptions. Here’s why – I like to know something went wrong without having to explicitly handle every possible error. I like checked exceptions. Here’s why – when I decide that an error should be handled in my system, I want to force it to be handled as a defensive measure. We can argue back and forth about the merits of checked exceptions – Java chose its path, .NET another, and some people don’t like exceptions at all – but that is a discussion for another place. What I want to look at is how these two types of exception get used on our current project.

As I mentioned before, we’re using unchecked exceptions for error conditions we cannot handle, and checked exceptions for those we can. Unchecked exceptions are allowed to bubble up to the top level, where they are logged.

Checked exceptions are never logged and rethrown – if we have to rethrow an exception because we cannot properly handle it, it means it really should be unchecked, and we should allow the top level to catch and log it. For example, you won’t find code like the following in our code base:

public void doSomething() {
  try {
    doSomeOtherMethod();
  } catch (SomeCheckedException e) {
    log(e);
    throw new UncheckedException(e);
  }
}

Instead, if we can’t handle doSomeOtherMethod’s exception, it should be unchecked and we should rely on the top-level to catch and log it:

public void doSomething() {
  ...
  doSomeOtherMethod();
  ...
}

If we can handle the error generated by doSomeOtherMethod, then the code should be more clearly written as:

public void doSomething() {
  try {
    doSomeOtherMethod();
  } catch (SomeCheckedException e) {
    log("Error occurred, but we'll handle it", e);
    handleErrorState(e);
  }
}

We’ve handled it immediately, in place, and haven’t propagated any exceptions.

There are of course exceptions to this rule (no pun intended). In the case of third party APIs, we may find that the exceptions they throw don’t match the context in which we are using them. Our JMS-based system assumes messages will be Java objects (de)serialized using XStream and sent as XML. The XStream code is pretty simple:

XStream xstream = new XStream();
Object anObject = xstream.fromXML(xmlToDeserialize);

XStream throws unchecked exceptions only – not that you could work that out from the Javadoc (I’m all in favour of lightly documented code, but external APIs should really document the unchecked exceptions they can explicitly throw). For us, we want to be able to handle the fact that we gave our deserializer some dodgy XML – which implies we need a checked exception. To handle this, we explicitly catch the StreamException, but not Throwable:

XStream xstream = new XStream();
try {
  Object anObject = xstream.fromXML(xmlToDeserialize);
} catch (StreamException e) {
  throw new DeserializerException(e);
}

The reason we don’t catch all exceptions is that we are happy we can handle the outcome if a StreamException is thrown – we understand that scenario. If XStream started throwing NullPointerExceptions, however, that’s a brand new scenario we haven’t planned for – we have to assume we can’t handle it. Put another way, what if we were using a new version of the XStream library that had a bug in it? By assuming any unchecked exception thrown by XStream was down to bad XML, we could end up throwing away lots of valid data.


Ian Griffiths has a similar take on exception handling, with more emphasis on application tiers/layers. Also note that I don’t claim that I as a person (or the development team as a whole) came up with this approach – as Ian’s post shows there is bound to be plenty of prior art out there 🙂

We’re currently working on a JMS-driven application, which is being used as an integration point between several systems. We’ve defined a standard exception handling process – checked exceptions for those errors that can be handled by the system, unchecked exceptions for those errors that cannot be handled. The unchecked exceptions are all allowed to bubble up, where they are caught at the top level (and the top level only).

Given that the entire system is driven by received messages, the “top” of our application is our MessageListener, which before today looked a little like this:

public void onMessage(Message m) {
  try {
    process(m);
  } catch (Throwable t) {
    log(t);
  }
}

As per our exception handling strategy, this is the only place where unchecked exceptions are caught. Today we started implementing the various sad-path scenarios, the first of which was “what happens if processing the message fails?”. In our case this scenario translates as process throwing an unchecked exception. This being our first scenario, we’ve assumed that any runtime exception is a transient error – possibly down to one of the systems we’re integrating with being down. As such we decided that we’ll want to attempt to process the message again.

The simplest way to re-try a message is to use a transactional JMS session and rollback the session, as this returns the message back on to the topic/queue – we’d then specify a maximum number of times a message can be retried. It also follows that when using a transactional session you need to commit the session if the message was successfully processed.
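Creating the transactional session is the easy bit – the standard JMS API takes a transacted flag when the session is created. A sketch, with the connection setup elided:

// 'true' makes this a transacted session; the acknowledge mode
// is ignored for transacted sessions
TopicSession topicSession =
    topicConnection.createTopicSession(true, Session.AUTO_ACKNOWLEDGE);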

Adding the code to commit a transaction is straightforward – but we do have to expose the session inside the listener (we’re using a topic here for monitoring purposes):

public OurMessageListener(TopicSession topicSession) {
  this.topicSession = topicSession;
}

public void onMessage(Message m) {
  try {
    process(m);
    // if we get here, we've processed the message
    topicSession.commit();
  } catch (Throwable t) {
    log(t);
  }
}

Adding the code to roll a session back is a bit more work:

public void onMessage(Message m) {
  try {
    process(m);
    topicSession.commit();
  } catch (Throwable t) {
    log(t);
    topicSession.rollback();
  }
}

Great so far, but our next “sad-path” scenario is going to give us a little more trouble. What if we receive a message that we can’t translate into something meaningful? We don’t want to re-try the message, as we know we’re not going to be able to handle it later – the message is just plain bad. To handle this case, we separated out the message processing and had it throw a checked exception:

public void onMessage(Message m) {
  try {
    Object command = new MessageHandler(m);
    process(command);
    topicSession.commit();
  } catch (MessageInvalidException e) {
    log(e);
    // we don't want to retry the message
    topicSession.commit();
  } catch (Throwable t) {
    log(t);
    topicSession.rollback();
  }
}

This works as far as it goes, but up to this point I’ve been oversimplifying things a little. We’d abstracted our use of JMS behind two simple methods, which so far had been good enough for use in all our tests, and both client and server code:

public interface MessagingFacade {
  void subscribe(MessageListener listener);
  void publish(String xmlToSend);
}

The subscribe call hides all the JMS plumbing – including the creation of the session itself. If we want to pass our session into the message listener, we need to expose it from the MessagingFacade or create it ourself – either way we kind of defeat the object of the facade. If we don’t use the facade, we end up complicating much of our code.

The solution we came up with was to create a TransactionalMessageListener, like so:

public class TransactionalMessageListener implements MessageListener {

  private final TopicSession topicSession;
  private final MessageListener delegate;

  public TransactionalMessageListener(
      TopicSession topicSession, MessageListener delegate) {
    this.topicSession = topicSession;
    this.delegate = delegate;
  }

  public void onMessage(Message m) {
    try {
      delegate.onMessage(m);
      topicSession.commit();
    } catch (Throwable t) {
      log(t);
      topicSession.rollback();
    }
  }
}

Our underlying message listener is no longer the top of our system, so it doesn’t need to log Throwable. Nor does it need to worry about the TopicSession, so it becomes much simpler – we catch and log the checked exception related to message processing, and let any unchecked exceptions bubble up to the TransactionalMessageListener:

public void onMessage(Message m) {
  try {
    Object command = new MessageHandler(m);
    process(command);
  } catch (MessageInvalidException e) {
    log(e);
  }
}

And finally we change our MessagingFacade a little, making the subscribe method more specific by calling it subscribeWithTransaction, and wrapping the listener with our new TransactionalMessageListener:

public void subscribeWithTransaction(MessageListener listener) {
  ...
  TransactionalMessageListener txnListener =
      new TransactionalMessageListener(topicSession, listener);
  ...
}

And there we have it. All the code is simple and testable – and not a dynamic proxy in sight (take that AOP nut-cases!). I still can’t help thinking there was a simpler way of handling this though…

JSR 270 covers the proposal for the J2SE 6.0 release. Unlike most other JSRs, it is not defining any new APIs; rather it is defining which JSRs are being considered for inclusion in what will no doubt be called Java 6. Looking down the list I was struggling to find much I actually wanted…

JSR 105: XML Digital Signature APIs (http://jcp.org/en/jsr/detail?id=105)

Got to admit, this seems inoffensive. It implements a W3C spec from 2002, which given how fast these things move is about par for the course. It’ll probably be handy for people doing web services and such, but does it need to be in the core JDK?

JSR 268: Java Smart Card I/O API (http://jcp.org/en/jsr/detail?id=268)

So, hands up those of us who are programming for Smart Cards. Now keep your hands up if the lack of inclusion of the Smart Card I/O API into J2SE is causing you a problem. While I don’t dispute we’ll see more Smart Cards about in the future, is this really core functionality? More core than, oh I don’t know, being able to work out how much disk space is free?

JSR 199: Java Compiler API (http://jcp.org/en/jsr/detail?id=199)

Something interesting, and what in my mind new releases of J2SE should really be about.

While the existing command-line versions of compiler receive their inputs from the file systems and deposit their outputs there, reporting errors in a single output stream, the new compiler API will allow a compiler to interact with an abstraction of the file system. This abstraction will likely be provided by an extension of the NIO facilities in Tiger (1.5), and allow users to provide source and class files (and paths) to the compiler in the file system, in jar files, or in memory, and allowing the compiler to deposit its output similarly. Diagnostics will be returned from a compiler as structured data, with both pre- and post-localization messages available.

In addition, the new API should provide a facility for a compiler to report dependency information among compilation units. Such information can assist an integrated development environment in reducing the scope of future recompilations.

Sounds good to me, especially the bit about the compiler reporting dependencies. No complaints here.
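For reference, this surfaced in Java 6 as the javax.tools API – a minimal sketch of invoking the in-process compiler:

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class CompileSomething {
  public static void main(String[] args) {
    // the compiler bundled with the JDK (null if running on a bare JRE)
    JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
    // null in/out/err default to the System streams; returns 0 on success
    int result = compiler.run(null, null, null, "SomeClass.java");
    System.out.println(result == 0 ? "compiled" : "failed");
  }
}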

JSR 202: Java Class File Specification Update (http://jcp.org/en/jsr/detail?id=202)

I’m a little confused about this – I thought these changes were required for Tiger, and as such would already be included – JSR-heads feel free to put me straight on this.

JSR 221: JDBC 4.0 API Specification (http://jcp.org/en/jsr/detail?id=221)

This version focuses on making API access simpler, and improving connection management, all while being backwards compatible. No complaints.

JSR 222: Java Architecture for XML Binding (JAXB) 2.0 (http://jcp.org/en/jsr/detail?id=222)

This has stumped me a bit. JAXB is a nice idea, but the generated classes suck and it has a whole load of dependencies. I suppose if JAXB is the way to go, then including it allows it to compete with the (nicer) XSD functionality in .NET. As for how much space adding all the JAXB dependencies will add to the overall JDK size…

JSR 223: Scripting for the Java Platform (http://jcp.org/en/jsr/detail?id=223)

Fear not, this is not the inclusion of Groovy (or, horror of horrors, BeanShell) in J2SE – it is actually about exposing Java objects to scripting languages. PHP will be supported by the specification (although the capacity will be there to support other languages). Whilst the JSR talks mostly about web-scripting, it should work equally well within a standard JVM. I don’t object to the concept, and am neutral on its inclusion in the core J2SE.
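As it turned out, this shipped in Java 6 as javax.script – a minimal sketch of sharing a Java object with a script and evaluating it:

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class Scripting {
  public static void main(String[] args) throws ScriptException {
    // Java 6 bundles a JavaScript engine; other languages plug in the same way
    ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
    engine.put("greeting", "hello from Java"); // expose a Java object to the script
    Object result = engine.eval("greeting.toUpperCase()");
    System.out.println(result); // HELLO FROM JAVA
  }
}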

JSR 224: Java API for XML-Based RPC (JAX-RPC) 2.0 (http://jcp.org/en/jsr/detail?id=224)

Another focus on ease of development, along with support for the newer W3C specs and for non-HTTP transports, so I guess it’s no bad thing – if the Java->WSDL->Java stuff gets improved and simplified too I’m happy, but again it does seem like this is being included to compete feature for feature with .NET rather than focusing on what most Java developers want in the core.

JSR 260: Javadoc Tag Technology Update (http://jcp.org/en/jsr/detail?id=260)

I use Javadoc so little, and when I do I rarely use any of its more advanced features. As such I think it unlikely I’ll be using any of these new tags:

• categorization of methods and fields by their target use
• semantical index of classes and packages
• distinction of static, factory, deprecated methods from ordinary methods
• distinction of property accessors from ordinary methods
• combining and splitting information into views
• embedding of examples, common use-cases along with Javadoc

But I know someone has been waiting for this stuff for ages, right?

JSR 269: Pluggable Annotation Processing API (http://jcp.org/en/jsr/detail?id=269)

Finally we end up with this JSR, which on first glance appears to improve the handling of annotations to make it easier to build bespoke annotation handlers, which I believe brings Java closer to .NET’s annotations.

Summary

No real “hot damn, I want me one of those” moments here. Whilst it might be unfair to judge the proposed inclusions this early on (the JSR has only just been released for review), a large number of the inclusions seem to be there to compete with similar functionality already in the .NET framework. Perhaps those of us who’ll never use this stuff won’t begrudge the larger download and the increasingly bloated feel of the JDK, or perhaps I’ll be completely wrong, and two years from now we’ll all be developing Smartcard distributed web-services systems… J2SE 5 focused on new language features and the improved concurrency utilities (incidentally, Doug Lea is all over a lot of the above JSRs), but so far J2SE 6.0 doesn’t seem to have as much focus. But if it can deliver on its promise of a better developer experience when it comes to web-services, then all those extra megabytes might be worth it.

First off, I’d like to say that 100% test coverage does not mean that your code is bug free – it simply means you haven’t written a test good enough to find a bug. Your tests could even cause all of your code to be executed, but not have a single assert anywhere.

How much assurance you can gain from having unit tests depends on the quality of the test itself, and on how much of your code is covered by tests. Tools like Clover and Emma can tell you what code gets checked as part of a test run.

A build is considered passed if it meets certain criteria. Typically these criteria include the fact that it compiles, or that the unit tests pass. What is to stop code coverage being used as one of these criteria? Beforehand you agree a level below which code coverage is not allowed to fall (in an existing project this is easier – you could set the existing code coverage percentage as the target). Any new code has to have at least the same coverage, if not better, for the build to pass. If you submit new code in a continuous integration environment and the code coverage is worse, your submission gets rejected. In practice this will hopefully result in improving code coverage, which in turn will result in the target level being increased.
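Clover’s Ant integration can express exactly this – something like the following, though I’m quoting the task from memory, so check the Clover documentation for the exact attribute names:

<!-- fail the build if overall coverage drops below the agreed threshold -->
<clover-check target="80%" haltOnFailure="true"/>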

A quick post for those people reading this site via JavaBlogs. You might like to know that you’ll no longer be receiving my del.icio.us links (http://del.icio.us/padark) spliced in with my feed. If you still want to get them, you’ll need to subscribe to my FeedBurner feed (http://feeds.feedburner.com/Magpiebrain), which carries summary posts with the del.icio.us links spliced in. All available feed types are listed on my feeds page (http://www.magpiebrain.com/feeds), or are available via the normal auto-discovery techniques (http://diveintomark.org/archives/2002/05/30/rss_autodiscovery).