magpiebrain

The site of Sam Newman, a consultant at ThoughtWorks

Posts from the ‘Development’ category

In the first part, I’m just going to be covering the basics.

Conventions

Before we start, some coding conventions. Firstly, CamelCase for method and variable names is out – for Ruby it’s lowercase and underscores all the way – it_is_this_way, itIsNotThisWay. Class and module names (of which more later) are the one break with this convention – they use CamelCase and must start with an uppercase letter, although the files that contain them can start with a lowercase letter. Of course you can use CamelCase everywhere if you want, but it’ll be confusing when calling Ruby code written by other people.

Syntax

Some things you’ll notice straight off – parentheses for methods with parameters are optional, so do_stuff(10, 20, 30) works, as does do_stuff 10, 20, 30. The current recommendation is to use parentheses when you have arguments, as there is a chance that Ruby 2 might break some method calls made without them (I’d appreciate some clarification on this!).
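To illustrate (do_stuff here is just a made-up method), both call styles invoke exactly the same method:

```ruby
# A made-up method to demonstrate optional parentheses
def do_stuff(a, b, c)
  a + b + c
end

do_stuff(10, 20, 30)  # with parentheses
do_stuff 10, 20, 30   # without - the same call
```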

Methods and if blocks use end to terminate rather than brackets:

[ruby]
def do_stuff

if something

elsif something_else

end
end
[/ruby]

Classes

Classes in Ruby are similar to classes in Java (we’ll deal with typing later). For example to define a Polygon class, you do the following:

[ruby]
class Polygon


end
[/ruby]

You can also subclass:

[ruby]
class Square < Polygon
end
[/ruby]

You implement constructors by defining an initialize method:

[ruby]
def initialize


end
[/ruby]

And call them using Polygon.new, Square.new etc.
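As a quick sketch (the num_sides parameter is invented purely for illustration), a constructor taking an argument looks like this:

```ruby
class Polygon
  # new passes its arguments straight through to initialize
  def initialize(num_sides)
    @num_sides = num_sides
  end
end

square = Polygon.new(4)  # invokes initialize(4)
```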

Modules

You define modules using:

[ruby]
module Utils


end
[/ruby]

You use modules as mixins using the include keyword. For example imagine the following:

[ruby]
module Renderer

def draw_line(x,y)

end
end
[/ruby]

Your Square class can then include Renderer to use the draw_line method:

[ruby]
class Square < Polygon

include Renderer
end
[/ruby]

Some might argue you could get this using an abstract base class – and you’d be right. However there are no limits to the number of mixins you can include, whereas you can only have one base class. Ruby provides several very useful mixins – for example if you define the <=> operator (we’ll be covering operators later), then by including the Comparable module you get <, <=, ==, >=, and > for free.
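As a minimal sketch (the Version class is invented for illustration), defining <=> and mixing in Comparable gives you all the comparison operators:

```ruby
class Version
  include Comparable

  attr_reader :number

  def initialize(number)
    @number = number
  end

  # Comparable only needs <=>; it derives <, <=, ==, >= and > from it
  def <=>(other)
    number <=> other.number
  end
end
```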

Methods and visibility

As you’ve already seen, you define a method using def method_name – if you have parameters, you define it using def method_name(arg1, arg2). Ruby does support optional parameters. In Java, you’ll often see code like this:

[java]
void doSomething(String name) {

doSomething(name, null);
}

void doSomething(String name, String description) {


}
[/java]

In Ruby you can do:

[ruby]
def do_something(name, description = nil)


end
[/ruby]
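Calling it with and without the optional argument (the method body here is invented purely to show the effect of the default):

```ruby
def do_something(name, description = nil)
  description ? "#{name}: #{description}" : name
end

do_something("widget")             # description defaults to nil
do_something("widget", "a thing")  # description supplied
```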

In Java, where you have protected, public, private and default visibility, you have protected, public and private in Ruby. private in Ruby is more like protected in Java however – a private method can be called from the class it was defined in, or from subclasses. private methods however can only ever be called with an implicit receiver – not even self is allowed – so if do_stuff is private:

[ruby]
do_stuff #Allowed
self.do_stuff #Not allowed
[/ruby]

Put more simply, a private method can never be called with an explicit receiver – nothing, not even self, can appear in front of the call.

Ruby’s protected methods can also only be called from the class itself or its subclasses, but they can be called with an explicit receiver (so long as the receiver is an instance of the same class) as well as without one:

[ruby]
self.do_stuff #Allowed
do_stuff #Allowed
[/ruby]
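A worked sketch of the difference (the Account class is invented for illustration): a protected method may be called on another instance of the same class, which is exactly what private forbids:

```ruby
class Account
  def initialize(balance)
    @balance = balance
  end

  # Calling other.balance is legal because balance is protected and
  # other is an Account; if balance were private, this would raise
  def richer_than?(other)
    balance > other.balance
  end

  protected

  def balance
    @balance
  end
end
```

From outside the class, Account.new(10).balance raises NoMethodError, while richer_than? works because the call happens inside the class.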

You have two options when defining method visibility. You can use the visibility keywords to define private/protected/public regions:

[ruby]
protected

def this_is_protected

end

private

def this_is_a_private_method

end

def this_is_also_private

end
[/ruby]

You can also do it on a single line:

[ruby]
def this_is_protected


end

def this_is_a_private_method


end

def this_is_also_private


end

protected :this_is_protected
private :this_is_a_private_method, :this_is_also_private
[/ruby]

Personally I prefer the first technique.

Next time we’ll be looking at operators, arrays, hashes and perhaps even the Ruby typing system.

  • Update 1: Fixed wording in the modules section
  • Update 2: Updated methods section thanks to feedback from Curt Hibbs
  • Update 3: Fixed typos in the classes and methods sections (thanks Gavri Fernandez)
  • Update 4: Fixed typos (thanks Eliot and Alex) and clarified use of camel case (thanks Jon ).
  • Update 5 21st April 2006: Fixed up some missing text due to WordPress import. Used new syntax highlighting for code samples

Two of the three bugs on my watch list have in recent days been marked for inclusion in Mustang (Java 6). RFE 4057701, entitled “Need way to find free disk space”, has been in since virtually the year dot, and was originally dropped from the first NIO JSR:

Resolution of this request is considered a high priority for Mustang.
Active development on this feature has been in progress for the past
several months. I expect that we will complete this work within the
next couple of months. These methods will be in Mustang-beta.

I’m trying to work out how this feature required “several months” of development, but at least it’s a “high priority”. Let’s gloss over the fact that even if I am still programming in Java by the time Java 6 is out, it’ll be another year or two before my clients will be using it.
Less important, but still a “woho!” moment for me, is the inclusion of RFE 4726365 – “Java 2D to support LCD optimized anti-aliased text (sub-pixel resolution)”.

The quality of Java2D text antialiasing leaves a lot to be
desired in comparison to OS naitve antialiasing, especially
ClearType on Windows.

Java2D uses gray scale antialiasing. At point sizes below a
certain threshold (around 14pt it seems) this just does not
look very good, with text appearing ‘lumpy’ and uneven.
Either the way antialiasing interacts with font hinting
needs to be improved, or there should be a settable
rendering hint so that text below a certain size is not
antialiased. In Windows standard text antialiasing, this
works well. Larger font sizes are antialiased, where the
most benefit is seen, and smaller font sizes are untouched,
making them legible.

Decent anti-aliasing is one of the key features in making Java UIs fit in better with native applications, and the fact that the UI will pick this up automatically should mean existing applications will look better for free, at least where LCDs are concerned. Now I hope someone looks at RFE 4109888.

When Java first arrived, the initial hype was all about “write once, run anywhere”. Its language is a mess of compromises, born out of its heritage as a language for embedded machines and a desire to keep C++ programmers happy. Once people got over the novelty of inherently portable code, the attention then fell (initially favourably) on Applets, followed by protests against the non-portable nature of AWT. Penetration outside of the web was minor.
Continue reading…

It was with some interest I spotted a positive mention of something called Context IoC (a new type of IoC apparently) on Dion Hinchcliffe’s Blog. The whole topic really bores me right now, as IoC stopped being something fancy a long time ago – to me it’s now nothing more than “calling a proper constructor”. I investigated further as Dion normally writes very sensibly, does some very nice diagrams and has impeccable taste.

Contextual IoC is outlined in a ServerSide piece by Sony Mathew. It’s not surprising I missed it the first time round as I stopped reading the ServerSide a while back. It turned out I really wasn’t missing much. He starts off on shaky ground:

IOC seems to be at odds with the fundamental paradigm of object encapsulation. The concept of upstream propagation of dependencies is indeed a contradiction to encapsulation. Encapsulation would dictate that a caller should know nothing of its helper objects and how it functions. What the helper objects do and need is their business as long as they adhere to their interface – they can grab what they need directly whether it be looking up EJBs, database connections, queue connections or opening sockets, files etc..

He almost makes a point here, but ends up saying “objects should interface with things via interfaces”. To those of us who create interfaces to reflect the role a concrete class plays in a system, this is nothing new. Quite how this is “at odds with the fundamental paradigm of object encapsulation” I’m not quite sure.

Things then look up a bit:

The benefits of IOC are that objects become highly focused in its functionality, highly reusable, and most of all highly unit testable.

That I agree with. Then we’re off again:

The problem with exposing dependencies via constructors or Java bean properties in general is that it clutters an object and overwhelms the real functional purposes of the constructor and Java bean properties.

I agree with exposing setters purely to provide dependencies – but constructors? At this point it’s clear that Sony and I disagree on what a constructor is – apparently creating an object with the tools it needs to do its job is not what a constructor is for. At this point we’re waiting for Sony’s solution – and the solution is Contextual IoC.

Let’s look at Sony’s example of this:


public class PoolGame {

  public interface Context {
    public String getApplicationProperty(String key);
    public void draw(Point point, Drawable drawable);
    public void savePoolGame(String name,
      PoolGameState poolGameState);
    public PoolGameState findPoolGame(String name);
  }

  private Context context;

  public PoolGame(Context cxt) {
    this.context = cxt;
  }

  ...
}

Right, so what’s happened here is that we’ve taken the dependencies (in the form of the method calls the PoolGame requires) and folded them into an inner interface. Question 1 – how is this different from passing in the roles as separate interfaces? It seems to me we have the need for 3 distinct roles, something along the lines of PoolRepository (for savePoolGame() and findPoolGame()), Display (for draw()) and ApplicationConfig (for getApplicationProperty()). My version of the code would be:


public class PoolGame {
  public PoolGame(
    PoolRepository rep,
    Display display,
    ApplicationConfig config) {
    ...
  }
}
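For comparison, here’s roughly what it takes to wire my version up – the role interfaces and method signatures below are hypothetical sketches, just to show that construction stays a single call:

```java
// Hypothetical role interfaces and a constructor-injected PoolGame,
// sketching the alternative to an inner Context interface
interface PoolRepository { void savePoolGame(String name, Object state); }
interface Display { void draw(Object point, Object drawable); }
interface ApplicationConfig { String getApplicationProperty(String key); }

class PoolGame {
    private final PoolRepository repository;
    private final Display display;
    private final ApplicationConfig config;

    PoolGame(PoolRepository repository, Display display, ApplicationConfig config) {
        this.repository = repository;
        this.display = display;
        this.config = config;
    }

    String property(String key) {
        return config.getApplicationProperty(key);
    }
}
```

Creating one is just new PoolGame(repository, display, config) – there is no extra Context implementation to write.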

So Sony’s code is dependent on having implementations of four methods; my code has a dependency on implementations of three roles in the system. Let’s look at the “benefits” of this approach:

The Context interface removes the clutter of exposing dependencies via constructors or Java bean properties

Does my constructor look any more cluttered than Sony’s inner Context interface? I know my object is going to be easy to create too – how easy is it to create Sony’s? Will it look as simple as constructing my version? So I dispute that benefit. Next:

It provides better clarity by organizing an object’s dependencies into a single personalized interface unique to that class. The PoolGame.Context interface is unique to the PoolGame class.

And that’s a benefit how? It’s a simple matter to see where my roles are used in the system – with contextual IoC you have to look for Context implementations that use things like PoolRepository, which would still need to exist. You’d still need to create things that perform storage and queries of games, things that display the games, or access application properties. But with contextual IoC you also have to create context objects to wrap them up.

Yes, you heard it here first folks, but Sun has finally woken up and decided to include a micro IoC container in the next version of Java! Using a special keyword, the container lets you create an object that can be used straight away, by passing all dependencies in.
Continue reading…

I like unchecked exceptions. Here’s why – I like to know something went wrong without having to explicitly handle every possible error. I like checked exceptions. Here’s why – when I decide that an error should be handled in my system, I want to force it to be handled as a defensive measure. We can argue back and forth about the merits of checked exceptions – Java chose its path, .NET another, some people don’t like exceptions at all – but that is a discussion for another place. What I want to look at is how these two types of exception get used on our current project.

As I mentioned before, we’re using unchecked exceptions for error conditions we cannot handle, and checked exceptions for those we can. Unchecked exceptions are allowed to bubble up to the top level, where they are logged.

Checked exceptions are never logged and rethrown – if we had to rethrow the exception because we cannot properly handle it, it means it really should be unchecked, and we should allow the top level to catch and log it. For example we don’t have examples of the following code in our code base:

public void doSomething() {
  try {
    doSomeOtherMethod();
  } catch (SomeCheckedException e) {
    log(e);
    throw new UncheckedException(e);
  }
}

Instead, if we can’t handle doSomeOtherMethod’s exception, it should be unchecked and we should rely on the top-level to catch and log it:

public void doSomething() {
  ...
  doSomeOtherMethod();
  ...
}

If we can handle the error generated by doSomeOtherMethod, then the code should be more clearly written as:

public void doSomething() {
  try {
    doSomeOtherMethod();
  } catch (SomeCheckedException e) {
    log("Error occurred, but we'll handle it", e);
    handleErrorState(e);
  }
}

We’ve handled it immediately, in place, and haven’t propagated any exceptions.

There are of course exceptions to this rule (no pun intended). In the case of third-party APIs, we may find that the exceptions they throw don’t match the context in which we are using them. Our JMS-based system assumes messages will be Java objects (de)serialized using XStream and sent as XML. The XStream code is pretty simple:

XStream xstream = new XStream();
Object anObject = xstream.fromXML(xmlToDeserialize);

XStream throws unchecked exceptions only – not that you could work that out from the Javadoc (I’m all in favour of lightly documented code, but external APIs should really document the unchecked exceptions they can explicitly throw). For us, we want to be able to handle the fact we gave our deserializer some dodgy XML – which implies we need a checked exception. To handle this, we explicitly catch the StreamException, but not Throwable:

XStream xstream = new XStream();
try {
  Object anObject = xstream.fromXML(xmlToDeserialize);
} catch (StreamException e) {
  throw new DeserializerException(e);
}

The reason we don’t catch all exceptions is that we are happy we can handle the outcome if a StreamException is thrown – we understand that scenario. If XStream started throwing NullPointerExceptions however, that’s a brand new scenario we haven’t planned for – we have to assume we can’t handle it. Put another way, what if we were using a new version of the XStream library that had a bug in it? By assuming any unchecked exception thrown by XStream was down to bad XML, we could end up throwing away lots of valid data.
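A library-free sketch of the same wrapping pattern (DeserializerException matches the name above, but the parsing logic is invented, and IllegalArgumentException stands in for XStream’s StreamException):

```java
// Checked exception marking "we understood this failure and can handle it"
class DeserializerException extends Exception {
    DeserializerException(Throwable cause) { super(cause); }
}

class Deserializer {
    // Wraps only the failure we understand; anything else bubbles up unchecked
    static String fromXml(String xml) throws DeserializerException {
        try {
            if (xml == null || !xml.startsWith("<")) {
                throw new IllegalArgumentException("not well-formed: " + xml);
            }
            // Stand-in for real deserialization: strip the tags
            return xml.replaceAll("<[^>]*>", "");
        } catch (IllegalArgumentException e) {
            throw new DeserializerException(e);
        }
    }
}
```

Callers are now forced by the compiler to decide what bad XML means to them, while genuinely unexpected failures still reach the top-level handler.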


Ian Griffiths has a similar take on exception handling, with more emphasis on application tiers/layers. Also note that I don’t claim that I as a person (or the development team as a whole) came up with this approach – as Ian’s post shows there is bound to be plenty of prior art out there 🙂

We’re currently working on a JMS-driven application, which is being used as an integration point between several systems. We’ve defined a standard exception handling process – checked exceptions for those errors that can be handled by the system, unchecked exceptions for those errors that cannot be handled. The unchecked exceptions are all allowed to bubble up, where they are caught at the top level (and the top level only).

Given that the entire system is driven by received messages, the “top” of our application is our MessageListener, which before today looked a little like this:

public void onMessage(Message m) {
  try {
    process(m);
  } catch (Throwable t) {
    log(t);
  }
}

As per our exception handling strategy, this is the only place where unchecked exceptions are caught. Today we started implementing the various sad-path scenarios, the first of which was “what happens if processing the message fails?”. In our case this scenario translates as process throwing an unchecked exception. This being our first scenario, we’ve assumed that any runtime exception is a transient error – possibly down to one of the systems we’re integrating with being down. As such we decided that we’ll want to attempt to process the message again.

The simplest way to re-try a message is to use a transactional JMS session and rollback the session, as this returns the message back on to the topic/queue – we’d then specify a maximum number of times a message can be retried. It also follows that when using a transactional session you need to commit the session if the message was successfully processed.

Adding the code to commit a transaction is straightforward – but we do have to expose the session inside the listener (we’re using a topic here for monitoring purposes):

public OurMessageListener(TopicSession topicSession) {
  this.topicSession = topicSession;
}

public void onMessage(Message m) {
  try {
    process(m);
    //if we get here, we've processed the message
    topicSession.commit();
  } catch (Throwable t) {
    log(t);
  }
}

Adding the code to roll a session back is a bit more work:

public void onMessage(Message m) {
  try {
    process(m);
    topicSession.commit();
  } catch (Throwable t) {
    log(t);
    topicSession.rollback();
  }
}

Great so far, but our next “sad-path” scenario is going to give us a little more trouble. What if we receive a message that we can’t translate into something meaningful? We don’t want to re-try the message, as we know we’re not going to be able to handle it later – the message is just plain bad. To handle this case, we separated out the message processing and had it throw a checked exception:

public void onMessage(Message m) {
  try {
    Object command = new MessageHandler(m);
    process(command);
    topicSession.commit();
  } catch (MessageInvalidException e) {
    log(e);
    //we don't want to retry the message
    topicSession.commit();
  } catch (Throwable t) {
    log(t);
    topicSession.rollback();
  }
}

This works as far as it goes, but up to this point I’ve been oversimplifying things a little. We’d abstracted our use of JMS behind two simple methods, which so far had been good enough for use in all our tests, and both client and server code:

public interface MessagingFacade {
  void subscribe(MessageListener listener);
  void publish(String xmlToSend);
}

The subscribe call hides all the JMS plumbing – including the creation of the session itself. If we want to pass our session into the message listener, we need to expose it from the MessagingFacade or create it ourself – either way we kind of defeat the object of the facade. If we don’t use the facade, we end up complicating much of our code.

The solution we came up with was to create a TransactionalMessageListener, like so:

public class TransactionalMessageListener
    implements MessageListener {

  public TransactionalMessageListener(
      TopicSession topicSession,
      MessageListener delegate) {
    ...
  }

  public void onMessage(Message m) {
    try {
      delegate.onMessage(m);
      topicSession.commit();
    } catch (Throwable t) {
      log(t);
      topicSession.rollback();
    }
  }
}
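Stripped of the JMS plumbing, the decorator idea can be sketched like this (Listener and the Runnable hooks are purely illustrative stand-ins for MessageListener and the TopicSession):

```java
interface Listener { void onMessage(String message); }

// Decorator: owns commit/rollback, delegates the actual processing
class TransactionalListener implements Listener {
    private final Listener delegate;
    private final Runnable commit;
    private final Runnable rollback;

    TransactionalListener(Listener delegate, Runnable commit, Runnable rollback) {
        this.delegate = delegate;
        this.commit = commit;
        this.rollback = rollback;
    }

    public void onMessage(String message) {
        try {
            delegate.onMessage(message);
            commit.run();  // only reached if processing succeeded
        } catch (Throwable t) {
            // top level: log in real code, then return the message to the queue
            rollback.run();
        }
    }
}
```

The delegate stays free of transaction concerns, and the wrapper can be unit-tested by handing it throwing and non-throwing delegates.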

Our underlying message listener is no longer the top of our system, so doesn’t need to log Throwable. Nor does it need to worry about the TopicSession, so it becomes much simpler – we catch and log the checked exception related to message processing, and let any unchecked exceptions bubble up to the TransactionalMessageListener:

public void onMessage(Message m) {
  try {
    Object command = new MessageHandler(m);
    process(command);
  } catch (MessageInvalidException e) {
    log(e);
  }
}

And finally we change our MessagingFacade a little, making the subscribe method more specific by calling it subscribeWithTransaction, and wrapping the listener with our new TransactionalMessageListener:

public void subscribeWithTransaction(
    MessageListener listener) {
  ...
  TransactionalMessageListener txnListener =
    new TransactionalMessageListener(topicSession, listener);
  ...
}

And there we have it. All the code is simple and testable – and not a dynamic proxy in sight (take that AOP nut-cases!). I still can’t help thinking there was a simpler way of handling this though…

One of the hardest things about making the transition from Windows to OSX was letting TopStyle go. Quite simply it was about the only thing that made struggling with browser inconsistencies bearable, and it made playing around with CSS a pleasure. I have never been happy with the OSX replacement I got for it – CSSEdit, whilst it handled previews nicely, didn’t do keyword completion as well as TopStyle, nor did it have downright handy features like a colour chooser.

I tried WestCiv’s Style Master about six months ago when I was still Windows-bound, but it didn’t work as well for me as TopStyle did. However the new version looks like it might have a killer feature which is encouraging me to give it another go. Style Master 4’s design pane allows you to click on a part of the preview, and immediately edit the styles which affect that part of the display. It also shows the element’s position in the overall page structure, which will make debugging cascading rules much easier. The fact that it works on both Windows and OSX is just a bonus. Anyway, with an impending heavy session working on the new site design due this holiday weekend, I’ll try and post a review soon.

JSR 270 covers the proposal for the J2SE 6.0 release. Unlike most other JSRs, it is not defining any new APIs – rather, it is defining which JSRs are being considered for inclusion in what will no doubt be called Java 6. Looking down the list I was struggling to find much I actually wanted…

<a href="http://jcp.org/en/jsr/detail?id=105">JSR 105: XML Digital Signature APIs</a>

Got to admit, this seems inoffensive. It implements a W3C spec from 2002, which given how fast these things move is about par for the course. It’ll probably be handy for people doing web services and such, but does it need to be in the core JDK?

<a href="http://jcp.org/en/jsr/detail?id=268">JSR 268: Java Smart Card I/O API</a>

So, hands up those of us who are programming for Smart Cards. Now keep your hands up if the lack of inclusion of the Smart Card I/O API in J2SE is causing you a problem. While I don’t dispute we’ll see more Smart Cards about in the future, is this really core functionality? More core than, oh I don’t know, being able to work out how much disk space is free?

<a href="http://jcp.org/en/jsr/detail?id=199">JSR 199: Java Compiler API</a>

Something interesting, and what in my mind new releases of J2SE should really be about.

While the existing command-line versions of compiler receive their inputs from the file systems and deposit their outputs there, reporting errors in a single output stream, the new compiler API will allow a compiler to interact with an abstraction of the file system. This abstraction will likely be provided by an extension of the NIO facilities in Tiger (1.5), and allow users to provide source and class files (and paths) to the compiler in the file system, in jar files, or in memory, and allowing the compiler to deposit its output similarly. Diagnostics will be returned from a compiler as structured data, with both pre- and post-localization messages available.

In addition, the new API should provide a facility for a compiler to report dependency information among compilation units. Such information can assist an integrated development environment in reducing the scope of future recompilations.

Sounds good to me, especially the bit about the compiler reporting dependencies. No complaints here.

<a href="http://jcp.org/en/jsr/detail?id=202">JSR 202: Java Class File Specification Update</a>

I’m a little confused about this – I thought these changes were required for Tiger, and as such would already be included – JSR-heads feel free to put me straight on this.

<a href="http://jcp.org/en/jsr/detail?id=221">JSR 221: JDBC 4.0 API Specification</a>

This version focuses on making API access simpler, and improving connection management, all while being backwards compatible. No complaints.

<a href="http://jcp.org/en/jsr/detail?id=222">JSR 222: Java Architecture for XML Binding (JAXB) 2.0</a>

This has stumped me a bit. JAXB is a nice idea, but the generated classes suck and it has a whole load of dependencies. I suppose if JAXB is the way to go, then including it allows it to compete with the (nicer) XSD functionality in .NET. As for how much space adding all the JAXB dependencies will add to the overall JDK size…

<a href="http://jcp.org/en/jsr/detail?id=223">JSR 223: Scripting for the Java Platform</a>

Fear not, this is not the inclusion of Groovy (or, horror of horrors, BeanShell) in J2SE – it is actually about exposing Java objects to scripting languages. PHP will be supported by the specification (although the capacity will be there to support other languages). Whilst the JSR talks mostly about web scripting, it should work equally well within a standard JVM. I don’t object to the concept, and am neutral about its inclusion in the core J2SE.

<a href="http://jcp.org/en/jsr/detail?id=224">JSR 224: Java API for XML-Based RPC (JAX-RPC) 2.0</a>

Another focus on supporting ease of development, as well as supporting the newer W3C specs, and will support non-HTTP transports, so I guess it’s no bad thing – if the Java->WSDL->Java stuff gets improved and simplified too I’m happy, but again it does seem like this is being included to compete feature to feature with .NET rather than focusing on what most Java developers want in the core.

<a href="http://jcp.org/en/jsr/detail?id=260">JSR 260: Javadoc Tag Technology Update</a>

I use Javadoc so little, and when I do I rarely use any of its more advanced features. As such I think it unlikely I’ll be using any of these new tags:

  • categorization of methods and fields by their target use
  • semantical index of classes and packages
  • distinction of static, factory, deprecated methods from ordinary methods
  • distinction of property accessors from ordinary methods
  • combining and splitting information into views
  • embedding of examples, common use-cases along with Javadoc

But I know someone has been waiting for this stuff for ages, right?

<a href="http://jcp.org/en/jsr/detail?id=269">JSR 269: Pluggable Annotation Processing API</a>

Finally we end up with this JSR, which on first glance appears to improve the handling of annotations to make it easier to build bespoke annotation handlers, which I believe brings Java closer to .NET’s annotations.

Summary

No real “hot damn, I want me one of those” moments here. Whilst it might be unfair to judge the proposed inclusions this early on (the JSR has only just been released for review), a large number of the inclusions seem to be there to compete with similar functionality already in the .NET framework. Perhaps those of us who’ll never use this stuff won’t begrudge the larger download and the increasingly bloated feel of the JDK, or perhaps I’ll be completely wrong, and two years from now we’ll all be developing Smart Card distributed web-services systems… J2SE 5 focused on new language features and the improved concurrency utilities (incidentally, Doug Lea is all over a lot of the above JSRs), but so far J2SE 6.0 doesn’t seem to have as much focus. But if it can deliver on its promise of a better developer experience when it comes to web services, then all those extra megabytes might be worth it.

I’ve seen the diagram below used to describe “Agile Development”. I found it to be quite a good overview of a typical agile development process.

agile_software_development_cycle.gif
Continue reading…