Archive for the ‘Java’ Category

The case against Inversion of Control

Thursday, April 3rd, 2008

Inversion of Control is a refactoring that rose to fame with implementations like Spring or PicoContainer. It is a rather misunderstood feature of software design (notably in its implications), but a rather often used one - mainly through the use of the above-cited frameworks. This article is an attempt at debunking the myth and presenting you with the hard, crude reality.

Not a pattern, more a refactoring

Of course, I would love to be able to pit refactorings against patterns, but it is not that simple and the line between the two is easily crossed.

The refactoring we are using with IoC comes along the following scenario: we have a working system, but somehow we would like to reduce dependencies between some of its components; so we decide to refactor it and feed the dependencies to the components instead of having them explicitly request them. Under this light, I have to admit that I prefer the term Dependency Injection to Inversion of Control!

To realise this refactoring, we use the Builder pattern: one component knows how to build another component so that the latter works correctly without having to know how to build itself.
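As a sketch (the class names here are hypothetical, not taken from any framework), the before/after of that refactoring looks like this in plain Java:

```java
// Before: the component builds its own collaborator - a hard-wired dependency.
class ReportService {
    private final Database db = new Database();

    public String run() { return db.query("SELECT 1"); }
}

// After: the dependency is fed in from outside; the service no longer
// needs to know how to build it, only how to use it.
class InjectedReportService {
    private final Database db;

    public InjectedReportService(Database db) { this.db = db; }

    public String run() { return db.query("SELECT 1"); }
}

class Database {
    public String query(String sql) { return "result of " + sql; }
}
```

Note that the refactored version gains testability as a side effect: a fake Database can now be passed in without any container at all.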

So, now that we are on the same level about IoC, here is what really bugs me about it.

Replaces compile-time validation with run-time errors

The main issue for me is that you cannot verify your dependencies while you are actually coding. You need to run your application and deal with a hundred configuration errors before you can even start testing your feature.

Because all the wiring of your classes happens at runtime, you will not know whether a feature is correctly configured (with all its dependencies, ad infinitum) until you actually use it. But this shouldn’t really scare you, because you have tests for everything in your system, don’t you?

Introduces more indirection and delegation

To create an efficient IoC refactoring, you need to break your dependency down into an interface (depend on things less likely to change) and its implementation. That’s one level of indirection.

Now, when you actually configure your dependency in your XML file, you don’t use actual objects, but names… text… that’s another level of indirection!
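For illustration, here is what that text-based wiring typically looks like in a Spring-style XML file (the bean names and classes are hypothetical): everything is a plain string, so a typo in a ref only surfaces at runtime.

```xml
<!-- "reportDao" is just text: misspell it and nothing complains at compile time -->
<bean id="reportService" class="com.example.ReportService">
  <property name="dao" ref="reportDao"/>
</bean>
<bean id="reportDao" class="com.example.JdbcReportDao"/>
```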

In order to manage all this indirection, your IoC container will use intermediary objects, proxies, to actually wire your classes together.

And that’s where it hurts a second time: when debugging, you have to traverse layers upon layers of classes and methods to understand where things go wrong! And don’t even get me started on trying to follow the path of a piece of code at development time!!

Hides dependencies but doesn’t reduce them

After all this meddling with your class’s instances, said class remains dependent on other objects, but you have now lost the aggregation of dependencies in the midst of the framework; that is, you no longer really know which dependencies the class needs to do its job (e.g. a data access object) and which are just there for delegating (e.g. listeners).

Worse, if you follow the “best practices” enunciated in the documentation of your IoC container, it is very likely that you have now introduced dependencies to the framework in many places of your system: annotations? interceptors? direct use of the IoC’s object factory?

Refactoring, not central piece of architecture (bis repetita)

As a conclusion, I would like to insist that IoC should really be a refactoring for specific parts of your system: you shouldn’t try to have everything dependency-injected. That’s bad for your system, and that’s bad for your sanity!

These days, you can be dubbed an Architect (notice the capital A, as before) very easily: just move every single instantiation into an IoC container and you get this very enterprisey architecture that’s as easy to maintain as it was before, with the addition of all the indirection… it makes you look good when new developers just stare blankly at your 2Mb XML configuration file.

Nota bene: I really have to thank Jason for motivating me to write up this post that I drafted earlier in January!


Depending on abstractions: a bad idea?

Tuesday, March 4th, 2008

I have seen my share of code where everything you are dealing with is an interface. And I have seen my share of bugs in this kind of code. While it is proven practice to depend on abstractions, this is also a source of many common mistakes.

nota bene: examples use the Java syntax, but as far as I know, the issue is the same in C#, C++… (name your mainstream language)

The culprit

Consider the following interface declaration:

	public interface Executable {

		public void init(Map config);

		public String execute();

	}

Quite easy to understand, right? Any class implementing this interface can accept initialisation parameters with the init(Map) method and will be executed with the execute() method.

Wrong! This interface doesn’t tell you anything. Essentially, it would mean exactly the same if it was written as such:

	public interface XYZ {

		public void bbb(Map x);

		public String aaaa();

	}

The problem with abstractions

The contract is fuzzy

The interface’s contract is really loose:

  • parameters are unspecified: nothing prevents the user from passing null or strange values to methods; in our example, what is the type of the objects in our Map parameter (generics can help if we are using Java 5+, but they would not be a panacea)?
  • return is unspecified: should we expect nulls? is there a domain for the values (X, Y or Z)?
  • call order is unspecified: when you are implementing the interface, what guarantees that init() will be called before execute()?

You can’t test an abstraction

I hate to state the bloody obvious, but, unless you provide an implementation (be it a mock one), the best you can test with an interface is its existence in the system!
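To make the point concrete, here is a minimal mock implementation of the Executable interface above (a sketch, with the interface repeated so the example is self-contained): the only behaviour you can test is whatever the mock itself decides to enforce.

```java
import java.util.Map;

interface Executable {
    public void init(Map config);
    public String execute();
}

// A trivial mock that records how it was driven - note that it is the MOCK,
// not the interface, that decides init() must come before execute().
class MockExecutable implements Executable {
    private boolean initialised = false;

    public void init(Map config) { initialised = true; }

    public String execute() {
        if (!initialised) throw new IllegalStateException("init() was never called");
        return "done";
    }
}
```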

Your IDE can’t help you

When following code in your IDE, each time you encounter an interface you have to guess which implementation is most likely to be provided at that moment. If anything, you could still run your application in debug mode to find out, but you might not have that luxury… I know that feeling! :)

Dealing with it

The issue is very clear with interfaces, but it is exactly the same with abstract classes or overridden methods!

Of course, I wouldn’t give away the abstraction capabilities of an OO language just because that language has poor contract management.

The only way of dealing with it for the moment is to make the code that depends on interfaces completely foolproof: expect exceptions to be thrown, expect parameters not to accept nulls, expect return values to be null or completely out of your domain… and try to mitigate the risk that the method execution order might not be respected (when you are implementing an interface).
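A sketch of such foolproof calling code, assuming nothing the interface does not guarantee (the Executable interface from above is repeated so the example stands alone):

```java
import java.util.HashMap;
import java.util.Map;

interface Executable {
    public void init(Map config);
    public String execute();
}

class DefensiveCaller {
    public String call(Executable e) {
        if (e == null) {
            throw new IllegalArgumentException("executable must not be null");
        }
        String result;
        try {
            e.init(new HashMap()); // nothing says init() accepts an empty map...
            result = e.execute();  // ...nor that execute() returns non-null
        } catch (RuntimeException ex) {
            // expect exceptions the contract never mentioned
            return "FAILED";
        }
        return result == null ? "UNKNOWN" : result;
    }
}
```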

The way forward would be to implement a contract language for interfaces (OCL anyone?)… as a matter of fact I kind of see a way of doing it in Java. I need to put more thought into it though, any suggestion welcome!


Oracle Application Server - SNAFU

Monday, February 4th, 2008

A long time ago, I had had to deploy a J2EE application on Oracle Application Server/OC4J… I can’t remember which version it was, but it doesn’t really matter because the issues I had then still exist today: the bloody thing doesn’t work out of the box, deals with my class loading scheme like it deals with a bad ho’, and the administration tooling looks like it could have been built by our friend Neanderthal…

Server configuration

Now I might be overreacting a bit, but what interest is there in releasing an application server onto which you can barely deploy an application? I keep running into “java.lang.OutOfMemoryError: PermGen space”… at least these errors are pretty well known: they happen when the JVM can’t allocate enough space for objects that won’t be garbage collected.

They are also quite easy to circumvent by adding -XX:MaxPermSize=256m to your java command (the default is 8m, and you can increase this number as much as you need and your box allows - I found that 256m was usually the right figure).

To do that in Oracle Application Server you can go into the EnterpriseManager and in the Administration tab of your OC4J instance, choose “Server Properties” and add a new line with the -XX:MaxPermSize option… done!
If you have OC4J as a standalone instance, you can simply edit the bin/oc4j (or oc4j.cmd on Windows) file and add the -XX:MaxPermSize option directly after the java command.

But you would think that the Oracle guys would have read that kind of documentation (Tuning Garbage Collection with the 5.0 Java[tm] Virtual Machine) and implemented sensible parameters so that the server works as best as possible when you start it for the first time, wouldn’t you?

Class loading issues

I am dreaming that one day, some bloke will build a NNWAS (No-Nonsense Web Application Server), where you would be able to deploy your web application without having to worry about the interactions between the class loading tree of your application and that of the container you deploy it upon.

The issue I am talking about is better described in this excellent article (http://www.qos.ch/logging/classloader.jsp). In a few words, the issue arises when one of your libraries (let’s call it high-level library, because it is often a library of abstractions) is attempting to load classes dynamically from a dependent library (the low-level library, containing the implementations), but the JVM has loaded a different copy of your high-level library in a classloader that is a parent of your application’s classloader.

This is depicted in the following diagram: the server’s high-level library gets loaded first, and when it requests the loading of a class from the low-level library, the class loading mechanism looks up the classloader tree and can’t find it.

Class loading issues diagram
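The parent-first delegation at the heart of the problem can be demonstrated in a few lines of Java (a sketch: the empty URLClassLoader stands in for the webapp’s classloader):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class LoaderDemo {
    public static void main(String[] args) throws Exception {
        // Simulate the tree: the server's loader is the parent of the webapp's.
        ClassLoader server = LoaderDemo.class.getClassLoader();
        ClassLoader webapp = new URLClassLoader(new URL[0], server);

        // Lookups delegate UP the tree: the child finds its parents' classes...
        Class<?> c = Class.forName("java.util.ArrayList", true, webapp);
        System.out.println(c.getClassLoader()); // bootstrap loader, printed as null

        // ...but never DOWN: a Class.forName() issued from code defined by the
        // server's loader cannot see classes that live only in the webapp's
        // loader - which is exactly the lookup failure described above.
    }
}
```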

One solution is to add the low-level library to the server’s shared libraries (in effect, pushing it up the classloader tree); another is to remove the high-level library from the server’s installation (in effect, promoting the application’s equivalent library to the right level of class loading so that it can reach the low-level library), with all the possible risks that this strategy might incur.

Bad tooling

Finally, I would like to automatically deploy my application to my OC4J container after each Maven build, after each change in the CVS, and overnight…
To that effect, Oracle provides a set of Ant tasks to deploy/undeploy the webapp, publish the shared libraries, create DataSources and connection pools, and even restart the server.

However, I wonder why those tools were developed as Ant tasks if you can’t use them within a build system: the build will fail if you attempt to undeploy a webapp that hasn’t been deployed first (same for shared libraries and connection pools), which means you can’t use the same script for a fresh install as for a repeat install!
That also means that if, for some reason (invalid web.xml, missing dependency…), your build breaks during the deployment phase, you can’t simply re-run your build script once the issue is fixed; you also have to manually put the server back into a stable state.
Nearly a deal breaker for continuous integration, but moreover, really annoying when you are in the ramp-up phase of a project!


Situation Normal: All Fu##ed Up


Fun with Java generics

Thursday, January 31st, 2008

Playing around with Spring beans and Hibernate, combined with inheritance of my domain and service classes (not an easy ride, I can tell you), I was trying to implement a generic service to cater for CRUD operations while, at the same time, keeping Spring’s autowiring facilities at bay…

So far, no luck. But in the process of implementing my generic class, I faced a problem that took me a while to circumvent. Consider the following classes:


  class A<T> {

    // reference to an instance of the B class
    public B b;

    public void doOp() {
      b.callOp(T.class);
    }
  }

  class B {

    /**
     * this method takes a class as parameter
     */
    public void callOp(Class c) {
      // …
    }
  }

The code that is problematic (as you might have guessed from the use of italics) is the following:


      b.callOp(T.class);

Well… it just doesn’t compile!
Apparently, there is no language feature to simply refer to the class of the type parameter.

However, there’s still a solution:


Class<T> type = (Class<T>) ((ParameterizedType) getClass().getGenericSuperclass()).getActualTypeArguments()[0];

As documented here: ParameterizedType.getActualTypeArguments()
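Put in context, here is a runnable sketch of the trick (the class names are mine): the type argument only survives erasure because a subclass, here an anonymous one, fixes it in its generic superclass declaration.

```java
import java.lang.reflect.ParameterizedType;

abstract class TypeAware<T> {
    // Walk up to the generic superclass declaration, which records the
    // actual type argument once a subclass has fixed it.
    @SuppressWarnings("unchecked")
    public Class<T> getParameterClass() {
        return (Class<T>) ((ParameterizedType) getClass()
                .getGenericSuperclass()).getActualTypeArguments()[0];
    }
}

public class TypeAwareDemo {
    public static void main(String[] args) {
        // The anonymous subclass fixes T, so the information is recoverable:
        TypeAware<String> t = new TypeAware<String>() {};
        System.out.println(t.getParameterClass()); // class java.lang.String
    }
}
```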

Thought I would share the goodness…


Cartoonesque ProgressBar

Friday, August 10th, 2007

I second Romain Guy when he says that progress bars are boring: let’s part with the idea that a progress bar has to be a simple long rectangle that is half full or half empty… I am sure that we can find better paradigms, each best suited to its case:

  • copy of a bunch of files: have a decreasing stack of files on the left and an increasing one on the right
  • CD burning: have a CD on fire that becomes just a pile of ashes when finished

Here’s my weak attempt at creating a Cartoon-like Progress Bar (you will need Java installed to run that), but it probably took me more time to configure the JNLP than to actually create the animation…

webstart

By the way, I greatly recommend Romain’s book Filthy Rich Clients, which I have added to my Safari bookshelf and which I think I should finish reading, considering my attempt at animating Swing… :)


Unchecked warnings and type erasure with Java Generics

Wednesday, March 14th, 2007

I am currently cleaning a whole code base from all its warnings and I kept stumbling upon a few warnings related to the use of generics. And all I can find on Google is people telling each other to use @SuppressWarnings("unchecked")…

For a start, I am not a big fan of annotations (I will probably post about that some day), but this is more like sidestepping the problem and acquiring bad habits for dealing with warnings.

2 different warnings

Consider the following example encountered while using Apache Log4J:

Enumeration<Appender> e = log.getAllAppenders();

This will generate the following warning: “Type safety: The expression of type Enumeration needs unchecked conversion to conform to Enumeration<Appender>”

OK, fine! so let’s cast it to the proper type:

Enumeration<Appender> e = (Enumeration<Appender>)log.getAllAppenders();

Well, the result is not quite what we expected, in that it is still generating a warning: “Type safety: The cast from Enumeration to Enumeration<Appender> is actually checking against the erased type Enumeration”.

Even though we are pretty sure this should never generate any ClassCastException, these warnings are just plain annoying for code quality…

More on erased types

The implementation of Generics in Java 1.5 came with a nice advantage: you could code in fancy spanking-new 1.5 generics and still generate classes that were compatible with previous versions of Java!

They did that by enforcing type safety with generics only at compile time, while otherwise generating code compatible with Java 1.2+ by removing any parameterised type from the compiled class.

In effect, using generics, it is now easier to code type-safe classes, but the checking is not done at runtime. If you find a way to feed your application instances that do not comply with the intended generics behaviour (in our example, an Enumeration of, say, String instead of Appender), this will certainly result in a nice ClassCastException!
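That failure mode is easy to reproduce: the heap pollution happens silently through a raw reference, and the explosion occurs later, at the cast the compiler inserted for you.

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    @SuppressWarnings({ "rawtypes", "unchecked" })
    public static void main(String[] args) {
        List<String> strings = new ArrayList<String>();
        List raw = strings;           // a raw view: the compiler only warns here
        raw.add(Integer.valueOf(42)); // heap pollution - no check at runtime

        // The compiler inserted a hidden cast to String on the next line,
        // and that cast is where the ClassCastException finally shows up:
        String s = strings.get(0);
    }
}
```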

And that’s the way it is implemented in Java, full stop! No discussion possible (until they decide to break with previous versions of Java). So why have a warning at all?

The solution

Actually, there is a very simple answer to this (apart from the fact that future versions might not provide backward compatibility)…

It is obvious we are trying to get an Enumeration of Appender instances in order to apply some process on each of them; …but wait!

Appender is an interface… (that’s what got me on the tracks for the solution!)

What you are actually getting is an Enumeration on implementations of Appender; that is, any implementation possible!!!

So really, the code logically needs to be written as follows:

Enumeration<? extends Appender> e = (Enumeration<? extends Appender>)log.getAllAppenders();

That’s right! We want to be able to deal generically with any subtype of Appender… seems soooo easy with hindsight, doesn’t it?
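A self-contained sketch of the same idea (the Appender and factory below are stand-ins, not the real Log4J types): the wildcard lets the enumeration carry any implementation of the interface.

```java
import java.util.Enumeration;
import java.util.Vector;

interface Appender {
    public void doAppend(String message);
}

class ConsoleAppender implements Appender {
    public void doAppend(String message) { System.out.println(message); }
}

public class WildcardDemo {
    // The factory really holds some concrete subtype of Appender...
    static Enumeration<? extends Appender> getAllAppenders() {
        Vector<ConsoleAppender> appenders = new Vector<ConsoleAppender>();
        appenders.add(new ConsoleAppender());
        return appenders.elements(); // Enumeration<ConsoleAppender> fits the wildcard
    }

    public static void main(String[] args) {
        // ...and the caller processes it generically, whatever the subtype is.
        Enumeration<? extends Appender> e = getAllAppenders();
        while (e.hasMoreElements()) {
            e.nextElement().doAppend("hello");
        }
    }
}
```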
