magpiebrain

The site of Sam Newman, a consultant at ThoughtWorks

Archive for ‘April, 2005’

Yes, you heard it here first, folks: Sun has finally woken up and decided to include a micro IoC container in the next version of Java! Using a special keyword, the container lets you create an object that can be used straight away, by passing in all its dependencies.
Continue reading…

I like unchecked exceptions. Here’s why – I like to know something went wrong without having to explicitly handle every possible error. I like checked exceptions. Here’s why – when I decide that an error should be handled in my system, I want to force it to be handled as a defensive measure. We can argue back and forth about the merits of checked exceptions – Java chose its path, .NET another, and some people don’t like exceptions at all – but that is a discussion for another place. What I want to look at is how these two types of exception get used on our current project.

As I mentioned before, we’re using unchecked exceptions for error conditions we cannot handle, and checked exceptions for errors we can handle. Unchecked exceptions are allowed to bubble up to the top level, where they are logged.
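To make the bubble-up concrete, here is a minimal, illustrative sketch of what such a top-level boundary can look like – the class and method names are my own for illustration, not the project's actual code:

```java
// An illustrative sketch (not the project's actual code) of a top-level
// boundary: unchecked exceptions bubble up from anywhere below and are
// caught and logged in exactly one place.
public class EntryPoint {
    boolean logged = false;

    public void run() {
        try {
            doWork(); // application code; may throw unchecked exceptions
        } catch (RuntimeException e) {
            log(e); // the single top-level catch-and-log point
        }
    }

    void doWork() {
        // stands in for the real application; fails for demonstration
        throw new IllegalStateException("something went wrong");
    }

    void log(Throwable t) {
        logged = true;
        System.err.println("Unhandled error: " + t);
    }
}
```

The point is that no intermediate layer catches the runtime exception – only the boundary does.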

Checked exceptions are never logged and rethrown – if we had to rethrow an exception because we cannot properly handle it, it means it really should be unchecked, and we should allow the top level to catch and log it. For example, you won’t find code like the following in our code base:



public void doSomething() {
    try {
        doSomeOtherMethod();
    } catch (SomeCheckedException e) {
        log(e);
        throw new UncheckedException(e);
    }
}

Instead, if we can’t handle doSomeOtherMethod’s exception, it should be unchecked and we should rely on the top-level to catch and log it:



public void doSomething() {
    ...
    doSomeOtherMethod();
    ...
}

If we can handle the error generated by doSomeOtherMethod, then the code should be more clearly written as:



public void doSomething() {
    try {
        doSomeOtherMethod();
    } catch (SomeCheckedException e) {
        log("Error occurred, but we'll handle it", e);
        handleErrorState(e);
    }
}

We’ve handled it immediately, in place, and haven’t propagated any exceptions.

There are of course exceptions to this rule (no pun intended). In the case of third-party APIs, we may find that the exceptions they throw don’t match the context in which we are using them. Our JMS-based system assumes messages will be Java objects (de)serialized using XStream and sent as XML. The XStream code is pretty simple:



XStream xstream = new XStream();
Object anObject = xstream.fromXML(xmlToDeserialize);

XStream throws unchecked exceptions only – not that you could work that out from the Javadoc (I’m all in favour of lightly documented code, but external APIs should really document the unchecked exceptions they can explicitly throw). For us, we want to be able to handle the fact that we gave our deserializer some dodgy XML – which implies we need a checked exception. To handle this, we explicitly catch the StreamException, but not Throwable:



XStream xstream = new XStream();
try {
    Object anObject = xstream.fromXML(xmlToDeserialize);
} catch (StreamException e) {
    throw new DeserializerException(e);
}

The reason we don’t catch all exceptions is that we are happy we can handle the outcome if a StreamException is thrown – we understand that scenario. If XStream started throwing NullPointerExceptions, however, that’s a brand new scenario we haven’t planned for – we have to assume we can’t handle it. Put another way, what if we were using a new version of the XStream library that had a bug in it? By assuming any unchecked exception thrown by XStream was down to bad XML, we could end up throwing away lots of valid data.
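The checked wrapper used above need be nothing more than a plain checked exception carrying the original cause – a sketch (the project's actual DeserializerException may well differ):

```java
// A sketch of a checked exception wrapping XStream's unchecked
// StreamException; the real DeserializerException may differ.
public class DeserializerException extends Exception {
    public DeserializerException(Throwable cause) {
        super(cause);
    }
}
```

Because it extends Exception rather than RuntimeException, callers are forced by the compiler to decide what to do with dodgy XML.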


Ian Griffiths has a similar take on exception handling, with more emphasis on application tiers/layers. Also note that I don’t claim that I as a person (or the development team as a whole) came up with this approach – as Ian’s post shows there is bound to be plenty of prior art out there 🙂

We’re currently working on a JMS-driven application, which is being used as an integration point between several systems. We’ve defined a standard exception handling process – checked exceptions for those errors that can be handled by the system, unchecked exceptions for those errors that cannot be handled. The unchecked exceptions are all allowed to bubble up, where they are caught at the top level (and the top level only).

Given that the entire system is driven by received messages, the “top” of our application is our MessageListener, which before today looked a little like this:



public void onMessage(Message m) {
    try {
        process(m);
    } catch (Throwable t) {
        log(t);
    }
}

As per our exception handling strategy, this is the only place where unchecked exceptions are caught. Today we started implementing the various sad-path scenarios, the first of which was “what happens if processing the message fails?”. In our case this scenario translates as process throwing an unchecked exception. This being our first scenario, we’ve assumed that any runtime exception is a transient error – possibly down to one of the systems we’re integrating with being down. As such we decided that we’ll want to attempt to process the message again.

The simplest way to re-try a message is to use a transactional JMS session and roll the session back, as this puts the message back onto the topic/queue – we’d then specify a maximum number of times a message can be retried. It also follows that when using a transactional session you need to commit the session if the message was successfully processed.
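The commit-on-success, rollback-and-retry-up-to-a-limit cycle can be sketched independently of JMS. In the sketch below FakeSession is a stand-in for a transacted TopicSession, and all names (including the retry limit) are illustrative rather than taken from the project:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A JMS-free sketch of the retry cycle: commit on success, rollback on
// failure, give up after a maximum number of redeliveries. FakeSession
// stands in for a transacted JMS session; names are illustrative.
public class RetryCycle {
    static final int MAX_RETRIES = 3; // illustrative limit

    // stand-in for a transacted session over a queue of messages
    static class FakeSession {
        Deque<String> queue = new ArrayDeque<>();
        String inFlight;

        String receive() { inFlight = queue.poll(); return inFlight; }
        void commit()    { inFlight = null; }             // message consumed
        void rollback()  { queue.addFirst(inFlight); }    // message goes back
    }

    // returns the number of attempts made before succeeding or giving up
    static int processWithRetry(FakeSession session,
                                java.util.function.Predicate<String> processor) {
        int attempts = 0;
        while (attempts < MAX_RETRIES) {
            String m = session.receive();
            if (m == null) break;
            attempts++;
            try {
                if (!processor.test(m)) throw new RuntimeException("transient failure");
                session.commit(); // processed: consume the message
                break;
            } catch (RuntimeException e) {
                session.rollback(); // put the message back for another try
            }
        }
        return attempts;
    }
}
```

With a real transacted session the rollback is what returns the message to the topic/queue; the loop here just makes the retry ceiling explicit.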

Adding the code to commit a transaction is straightforward – but we do have to expose the session inside the listener (we’re using a topic here for monitoring purposes):



public OurMessageListener(TopicSession topicSession) {
    this.topicSession = topicSession;
}

public void onMessage(Message m) {
    try {
        process(m);
        // if we get here, we've processed the message
        topicSession.commit();
    } catch (Throwable t) {
        log(t);
    }
}

Adding the code to roll a session back is a bit more work:



public void onMessage(Message m) {
    try {
        process(m);
        topicSession.commit();
    } catch (Throwable t) {
        log(t);
        topicSession.rollback();
    }
}

Great so far, but our next “sad-path” scenario is going to give us a little more trouble. What if we receive a message that we can’t translate into something meaningful? We don’t want to re-try the message, as we know we’re not going to be able to handle it later – the message is just plain bad. To handle this case, we separated out the message processing and had it throw a checked exception:



public void onMessage(Message m) {
    try {
        Object command = new MessageHandler(m);
        process(command);
        topicSession.commit();
    } catch (MessageInvalidException e) {
        log(e);
        // we don't want to retry the message
        topicSession.commit();
    } catch (Throwable t) {
        log(t);
        topicSession.rollback();
    }
}

This works as far as it goes, but up to this point I’ve been oversimplifying things a little. We’d abstracted our use of JMS behind two simple methods, which so far had been good enough for all our tests, and for both client and server code:



public interface MessagingFacade {
    void subscribe(MessageListener listener);
    void publish(String xmlToSend);
}

The subscribe call hides all the JMS plumbing – including the creation of the session itself. If we want to pass our session into the message listener, we need to expose it from the MessagingFacade or create it ourselves – either way we kind of defeat the object of the facade. If we don’t use the facade, we end up complicating much of our code.

The solution we came up with was to create a TransactionalMessagingListener like so:



public class TransactionalMessagingListener
        implements MessageListener {

    public TransactionalMessagingListener(
            TopicSession topicSession, MessageListener delegate) {
        ...
    }

    public void onMessage(Message m) {
        try {
            delegate.onMessage(m);
            topicSession.commit();
        } catch (Throwable t) {
            log(t);
            topicSession.rollback();
        }
    }
}

Our underlying message listener is no longer the top of our system, so it doesn’t need to log Throwable. Nor does it need to worry about the TopicSession, so it becomes much simpler – we catch and log the checked exception related to message processing and let any unchecked exceptions bubble up to the TransactionalMessagingListener:



public void onMessage(Message m) {
    try {
        Object command = new MessageHandler(m);
        process(command);
    } catch (MessageInvalidException e) {
        log(e);
    }
}

And finally we change our MessagingFacade a little, making the subscribe method more specific by renaming it subscribeWithTransaction, and wrapping the listener with our new TransactionalMessagingListener:



public void subscribeWithTransaction(MessageListener listener) {
    ...
    TransactionalMessagingListener txnListener =
        new TransactionalMessagingListener(topicSession, listener);
    ...
}

And there we have it. All the code is simple and testable – and not a dynamic proxy in sight (take that AOP nut-cases!). I still can’t help thinking there was a simpler way of handling this though…
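For what it's worth, the shape of that final design can be sketched without any JMS at all – all names below are illustrative stand-ins, not the project's classes. The decorator owns the transaction boundary; the inner listener stays oblivious:

```java
import java.util.ArrayList;
import java.util.List;

// An illustrative, JMS-free sketch of the decorator: commit when the
// delegate succeeds, rollback when it throws. Names are stand-ins.
public class DecoratorSketch {
    interface Listener { void onMessage(String m); }

    interface Txn { void commit(); void rollback(); }

    // records calls so the behaviour can be checked without a broker
    static class RecordingTxn implements Txn {
        List<String> calls = new ArrayList<>();
        public void commit()   { calls.add("commit"); }
        public void rollback() { calls.add("rollback"); }
    }

    static class TransactionalListener implements Listener {
        private final Txn txn;
        private final Listener delegate;

        TransactionalListener(Txn txn, Listener delegate) {
            this.txn = txn;
            this.delegate = delegate;
        }

        public void onMessage(String m) {
            try {
                delegate.onMessage(m); // inner listener knows nothing of transactions
                txn.commit();
            } catch (RuntimeException e) {
                txn.rollback();
            }
        }
    }
}
```

This is also why the design is easy to test: you can drive the wrapper with a fake transaction and a throwing delegate, no messaging infrastructure required.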

I received an email a few days ago from a helpful reader informing me that he’d tried to post a comment but failed. Initially I was worried that Perl was mis-firing again, until I noticed the URL he was trying to post to. A while ago I de-cruftified this site’s URLs. By default Movable Type generates simplistic URLs named using the entry ID (e.g. 000334.html). Not only is this not terribly forward thinking (what if I moved to PHP?), the URLs don’t really match the information’s structure.

What I did back then was first to remove all URL suffixes. Next, I moved the URLs into a format based on years, months and dates – for example /2005/04/13 would be used by all posts made on the 13th of April, 2005. This left me with nice URLs, but I neglected to handle the fact that I had over 200 posts using the old form. These were still generating hits, but were serving an old version of the page. When I changed the comment posting, these pages started mis-firing.

The obvious solution was to set up Apache redirection rules for all the posts. To make life easier, I added an easily grep-able HTML comment to all the individual post pages. Next, I ran a recursive find over the archive structure, grepping for the comment:


find . -type f -not -name "*.html" -print | xargs grep "ENTRY "

The -not negated what followed, excluding all files ending in .html. This wasn’t completely necessary, but the HTML comment I put in wasn’t specific enough, so I was getting a few false positives from the old HTML pages. After this query, I was left with several hundred results like this:


./2005/04/13/this_is_a_post: ENTRY 000342

I fired up SubEthaEdit, and a quick regexp later I had the following for each of the 300 or so matches:


RedirectMatch permanent 000342.html /2005/04/13/this_is_a_post

This was then pasted into my .htaccess file, and I could delete the old HTML files, safe in the knowledge that any Google hits for the old pages would get redirected to the new, de-crufty URLs.

While I was at it, I knocked up a favicon, some rollovers for the sidebar and entry navigation, and enhanced the side navigation to show the last few recent posts and del.icio.us links.

In my (albeit limited) experience, failures attributed to development processes – be it Agile, XP, DSDM, Waterfall or whatever – fall into two groups. Firstly, those involved assume the process can succeed without them making it work – that because there is a process, you don’t have to be organized, disciplined or put the effort in. Secondly, people abdicate the responsibility of thinking to the process itself – that is to say, they blindly follow a particular process without at any stage engaging their brains. To paraphrase Douglas Bader – “rules are there for the guidance of the wise and the obedience of fools”.
Continue reading…