Archive for the ‘Web/Tech’ Category

(Ab)Use of the C-word (cloud)

Friday, May 15th, 2009

The more cloud architectures gather buzz, the more I am seeing all sorts of misuse of the c-word. From describing grid computing to confusing it with SaaS, people seem to see clouds everywhere, even in the clearest of skies!

The “Palme d’Or” should probably go to a business acquaintance who managed to use the word three times in the same five-minute presentation, with three different meanings:

  • the Spring cloud === the Spring context
  • the ActiveMQ cloud === the ActiveMQ management of queues
  • the VM cloud === the in-memory execution of a process

You’ve got to love them clouds!

I wonder, even as I am myself piloting in the storm, how long it will be before I get really tired of the c-word!


The case against Inversion of Control

Thursday, April 3rd, 2008

Inversion of Control is a refactoring that rose to fame with implementations such as Spring and PicoContainer. It is quite a misunderstood technique of software design (notably in its implications), yet a rather often used one - mainly through the frameworks cited above. This article is an attempt at debunking the myth and presenting you with the hard, crude reality.

Not a pattern, more a refactoring

Of course, I would love to be able to set refactorings and patterns against each other, but it is not that simple: the separation between the two is easily crossed.

The refactoring we are using with IoC follows this scenario: we have a working system, but somehow we would like to reduce the dependencies between some of its components; so we decide to refactor it and feed the dependencies to the components instead of having them explicitly request them. In this light, I have to admit that I prefer the term Dependency Injection to Inversion of Control!
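To make the scenario concrete, here is a minimal sketch of the before and after (the names ReportService, ReportDao and JdbcReportDao are hypothetical, and I assume constructor injection):

    interface ReportDao {
      String fetchReport();
    }

    class JdbcReportDao implements ReportDao {
      public String fetchReport() { return "report"; }
    }

    // before the refactoring: the component explicitly builds its dependency
    class ReportServiceBefore {
      private final ReportDao dao = new JdbcReportDao(); // hard-wired
      public String run() { return dao.fetchReport(); }
    }

    // after the refactoring: the dependency is fed to the component from outside
    class ReportServiceAfter {
      private final ReportDao dao;

      public ReportServiceAfter(ReportDao dao) { // constructor injection
        this.dao = dao;
      }

      public String run() { return dao.fetchReport(); }
    }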

To realise this refactoring, we use the Builder pattern: one component knows how to build another component, so that the built component works correctly without having to know how to build itself.

So, now that we are on the same page about IoC, here is what is really bugging me about it.

Replaces compile-time validation with run-time errors

The main issue for me is that you cannot verify your dependencies while you are actually coding. You need to run your application and deal with a hundred configuration errors before you can even start testing your feature.

Because all the wiring of your classes happens at runtime, you will not know whether a feature is correctly configured (with all its dependencies, ad infinitum) until you are actually using it. But this shouldn’t really scare you, because you have tests for everything in your system, don’t you?
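To illustrate, here is a minimal sketch (with hypothetical names) of why this stings: a toy container that, like an IoC container resolving the names found in its XML file, instantiates classes by name. The typo below compiles happily and only blows up when the feature is first used:

    import java.util.Collections;
    import java.util.Map;

    // a toy container that wires objects from a name-to-class configuration
    class ToyContainer {
      private final Map<String, String> config;

      ToyContainer(Map<String, String> config) { this.config = config; }

      Object lookup(String beanName) throws Exception {
        String className = config.get(beanName);
        return Class.forName(className).getDeclaredConstructor().newInstance();
      }
    }

    class Demo {
      public static void main(String[] args) throws Exception {
        ToyContainer container = new ToyContainer(
            Collections.singletonMap("dao", "com.example.JdbcReportDoa")); // typo!
        // compiles fine, fails here at runtime with ClassNotFoundException
        Object dao = container.lookup("dao");
      }
    }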

Introduces more indirection and delegation

To create an efficient IoC refactoring, you need to break down your dependency into an interface (depend on things less likely to change) and its implementation. That’s one level of indirection.

Now, when you actually configure your dependency in your XML file, you don’t use actual objects, but names… text… that’s another level of indirection!

In order to manage all this indirection, your IoC container will use intermediary objects, proxies, to actually wire your classes together.
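For illustration, here is a minimal sketch of the kind of intermediary a container inserts, using a JDK dynamic proxy (the Greeter interface is hypothetical); notice that every call is now routed through an extra invoke() frame, which is exactly what you end up stepping through in the debugger:

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Proxy;

    interface Greeter {
      String greet(String name);
    }

    class ProxyDemo {
      public static void main(String[] args) {
        final Greeter target = name -> "Hello, " + name;

        // the container-style intermediary: an interception point for logging,
        // transactions, lazy resolution…
        InvocationHandler handler =
            (proxy, method, methodArgs) -> method.invoke(target, methodArgs);

        Greeter proxied = (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class<?>[] { Greeter.class },
            handler);

        System.out.println(proxied.greet("world")); // goes through invoke()
      }
    }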

And that’s where it hurts a second time: when debugging, you have to traverse layers upon layers of classes and methods to understand where things go wrong! And don’t even get me started on actually trying to follow the path of a piece of code at development time!!

Hides dependencies but doesn’t reduce them

After all this meddling with your class’s instances, said class remains dependent on other objects, but you have now lost the aggregation relationships in the midst of the framework; that is, you don’t really know any more which dependencies the class needs to do its job (e.g. a data access object) and which are just there for delegating (e.g. listeners).

Worse, if you follow the “best practices” enunciated in the documentation of your IoC container, it is very likely that you have now introduced dependencies on the framework in many places in your system: annotations? interceptors? direct use of the IoC container’s object factory?

Refactoring, not central piece of architecture (bis repetita)

In conclusion, I would like to insist that IoC should really be a refactoring reserved for specific parts of your system: you shouldn’t try to have everything dependency-injected. That’s bad for your system, and that’s bad for your sanity!

These days, you can be dubbed an Architect (notice the capital A, as before) very easily: just move every single instantiation into an IoC container and you get this very enterprisey architecture that’s just as easy to maintain as it was before, with all the added indirection… it makes you look good when new developers just stare blankly at your 2Mb XML configuration file.

Nota bene: I really have to thank Jason for motivating me to write up this post that I drafted earlier in January!


Depending on abstractions: a bad idea?

Tuesday, March 4th, 2008

I have seen my share of code where everything you deal with is an interface. And I have seen my share of bugs in this kind of code. While depending on abstractions is a proven practice, it is also a source of many common mistakes.

nota bene: examples use Java syntax, but as far as I know, the issue is the same in C#, C++… (name your mainstream language)

The culprit

Consider the following interface declaration:

	public interface Executable {

		public void init(Map config);

		public String execute();

	}

Quite easy to understand, right? Any class implementing this interface can accept initialisation parameters with the init(Map) method and will be executed with the execute() method.

Wrong! This interface doesn’t tell you anything. Essentially, it would mean exactly the same if it were written like this:

	public interface XYZ {

		public void bbb(Map x);

		public String aaaa();

	}

The problem with abstractions

The contract is fuzzy

The interface’s contract is really loose:

  • parameters are unspecified: nothing prevents the user from passing null or strange values to methods; in our example, what is the type of the objects in our Map parameter (a parameterized Map can help if we are using Java 5+, but it would not be a panacea)?
  • the return value is unspecified: should we expect nulls? is there a domain for the values (X, Y or Z)?
  • call order is unspecified: when you are implementing the interface, what guarantees that init() will be called before execute()?

You can’t test an abstraction

I hate to state the bloody obvious, but, unless you provide an implementation (be it a mock one), the best you can test with an interface is its existence in the system!

Your IDE can’t help you

When following code in your IDE, each time you encounter an interface, you have to guess which implementation is most likely to be provided at that moment. If anything, you can still run your application in debug mode to find out, but you might not have that luxury… I know that feeling! :)

Dealing with it

The issue is very clear with interfaces, but it is exactly the same with abstract classes or overridden methods!

Of course, I wouldn’t give up the abstraction capabilities of an OO language just because the language has poor contract management.

The only way of dealing with it for the moment is to make the code that depends on interfaces completely foolproof: expect exceptions to be thrown, expect parameters not to accept nulls, expect return values to be null or completely out of your domain… and, when you are implementing an interface, try to mitigate the risk that the method execution order might not be respected.
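To give an idea, here is a minimal sketch of such foolproof code, implementing the Executable interface defined earlier (the guards and the “result” key are of course only examples):

    import java.util.Map;

    class DefensiveExecutable implements Executable {

      private Map config;
      private boolean initialised = false;

      public void init(Map config) {
        if (config == null) {
          throw new IllegalArgumentException("config must not be null");
        }
        this.config = config;
        this.initialised = true;
      }

      public String execute() {
        // guard against init() never having been called
        if (!initialised) {
          throw new IllegalStateException("init() must be called before execute()");
        }
        Object value = config.get("result");
        // never let a null escape our domain
        return value == null ? "" : value.toString();
      }
    }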

The way forward would be to implement a contract language for interfaces (OCL anyone?)… as a matter of fact, I kind of see a way of doing it in Java. I need to put more thought into it though; any suggestions welcome!


Oracle Application Server - SNAFU

Monday, February 4th, 2008

A long time ago, I had to deploy a J2EE application on Oracle Application Server/OC4J… I can’t remember which version it was, but it doesn’t really matter, because the issues I had then still exist today: the bloody thing doesn’t work out of the box, it deals with my class loading scheme like it deals with a bad ho’, and the administration tooling looks like it could have been built by our friend Neanderthal…

Server configuration

Now I might be overreacting a bit, but what interest is there in releasing an application server onto which you can barely deploy an application? I keep running into “java.lang.OutOfMemoryError: PermGen space”… at least these errors are pretty well known: they happen when the JVM can’t allocate enough space in the permanent generation, which holds data (such as class definitions) that is rarely garbage collected.

They are also quite easy to circumvent by adding -XX:MaxPermSize=256m to your java command (the default is 8m, and you can increase this number as much as you need and your box can handle - I found that 256m was usually the right figure).

To do that in Oracle Application Server, go into the Enterprise Manager and, in the Administration tab of your OC4J instance, choose “Server Properties” and add a new line with the -XX:MaxPermSize option… done!
If you have OC4J as a standalone instance, you can simply edit the bin/oc4j file (or oc4j.cmd on Windows) and add the -XX:MaxPermSize option directly after the java command.

But you would think that the Oracle guys would have read that kind of documentation (Tuning Garbage Collection with the 5.0 Java[tm] Virtual Machine) and implemented sensible parameters so that the server works as well as possible when you start it for the first time, wouldn’t you?

Class loading issues

I am dreaming that one day, some bloke will build a NNWAS (No-Nonsense Web Application Server), where you would be able to deploy your web application without having to worry about the interactions between the class loading tree of your application and that of the container you deploy it on.

The issue I am talking about is better described in this excellent article (http://www.qos.ch/logging/classloader.jsp). In a few words, the issue arises when one of your libraries (let’s call it high-level library, because it is often a library of abstractions) is attempting to load classes dynamically from a dependent library (the low-level library, containing the implementations), but the JVM has loaded a different copy of your high-level library in a classloader that is a parent of your application’s classloader.

This is depicted in the following diagram: the server’s high-level library gets loaded first and, when it requests the loading of a class in the low-level library, the class loading mechanism looks up the classloader tree and can’t find it.

Class loading issues diagram

One solution is to add the low-level library to the shared libraries of the server (in effect, pushing it up the tree of classloaders); another is to remove the high-level library from the server’s installation (in effect, promoting the application’s equivalent library to the right level of class loading so it can reach the low-level library), with all the risks that this strategy might incur.
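To illustrate, here is a minimal sketch (with hypothetical names) of the failing pattern and of the usual workaround, the thread context classloader, as described in the article linked above:

    // code living inside the high-level library
    class HighLevelFactory {

      // failing pattern: Class.forName uses the classloader that defined
      // HighLevelFactory; if that is the server's classloader, it cannot
      // see the implementations bundled with the web application
      static Object createNaively(String implName) throws Exception {
        return Class.forName(implName).getDeclaredConstructor().newInstance();
      }

      // workaround: the thread context classloader, which the container
      // normally points at the web application's own classloader
      static Object createWithContextClassLoader(String implName) throws Exception {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        if (cl == null) {
          cl = HighLevelFactory.class.getClassLoader(); // fall back to our own
        }
        return cl.loadClass(implName).getDeclaredConstructor().newInstance();
      }
    }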

Bad tooling

Finally, I would like to automatically deploy my application on my OC4J container after each Maven build, after each change in the CVS and overnight…
To that effect, Oracle provides a set of Ant tasks to deploy/undeploy the webapp, publish the shared libraries, create DataSources and connection pools, and even restart the server.

However, I wonder why those tools have been developed as Ant tasks if you can’t use them within a build system: the build will fail if you attempt to undeploy a webapp that has not been deployed first (same for shared libraries and connection pools), which means you can’t use the same script for a fresh install as for a repeat install!
That also means that if, for some reason (invalid web.xml, missing dependency…), your build breaks during the deployment phase, you can’t simply re-run your build script once the issue is fixed; you also have to put the server back in a stable state manually.
Nearly a deal breaker for continuous integration but, above all, really annoying when you are in the ramp-up phase of a project!


Situation Normal: All Fu##ed Up


Fun with Java generics

Thursday, January 31st, 2008

Playing around with Spring beans and Hibernate, combined with inheritance of my domain and service classes (not an easy ride, I can tell you), I was trying to implement a generic service to cater for CRUD operations while, at the same time, keeping Spring’s autowiring facilities at bay…

So far, no luck. But in the process of implementing my generic class, I faced a problem that took me a while to circumvent. Consider the following classes:


  class A<T> {

    // reference to an instance of the B class
    public B b;

    public void doOp() {
      b.callOp(T.class);
    }
  }

  class B {

    /**
     * this method takes a class as parameter
     */
    public void callOp(Class c) {
      // …
    }
  }

The problematic code is the following:


      b.callOp(T.class);

Well… it just doesn’t compile!
Apparently, there is no language feature to simply refer to the class of the type parameter (because of type erasure, T does not exist as a class at runtime).

However, there’s still a solution:


(Class<T>) ((ParameterizedType) getClass().getGenericSuperclass()).getActualTypeArguments()[0];

As documented here: ParameterizedType.getActualTypeArguments()
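Note that this trick only works when a concrete subclass fixes the type argument, since getGenericSuperclass() inspects the superclass declaration, not the instance. A minimal sketch (hypothetical names):

    import java.lang.reflect.ParameterizedType;

    abstract class Base<T> {

      // recover the Class of T from the declaration of the concrete subclass
      @SuppressWarnings("unchecked")
      protected Class<T> typeOfT() {
        return (Class<T>) ((ParameterizedType) getClass().getGenericSuperclass())
            .getActualTypeArguments()[0];
      }
    }

    // the type argument must be fixed here for the lookup to succeed
    class StringBase extends Base<String> {
    }

    class GenericsDemo {
      public static void main(String[] args) {
        System.out.println(new StringBase().typeOfT()); // class java.lang.String
      }
    }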

Thought I would share the goodness…


I really care… do you?

Friday, September 28th, 2007

Do you care about software? Do you care about it being specified correctly, designed accordingly, developed with quality, maintained easily, and delivering the intended value to the business? Then take the pledge: icareaboutsoftware.org

Analysts, developers, database administrators, stakeholders… you are all welcome to raise the profile of this campaign!

via icareaboutsoftware.org - Be Part of the First 100


Cartoonesque ProgressBar

Friday, August 10th, 2007

I second Romain Guy when he says that Progress Bars are boring: let’s part with the idea that a progress bar has to be a simple long rectangle that is half full or half empty… I am sure that we can find better paradigms, better suited to each case:

  • copy of a bunch of files: have a decreasing stack of files on the left and an increasing one on the right
  • CD burning: have a CD on fire that becomes just a pile of ashes when finished
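To give an idea of how little Swing code the first paradigm requires, here is a minimal sketch (a custom UI delegate, with an arbitrary number of “files”):

    import java.awt.Color;
    import java.awt.Graphics;
    import javax.swing.JComponent;
    import javax.swing.JProgressBar;
    import javax.swing.plaf.basic.BasicProgressBarUI;

    // paints a shrinking stack of "files" on the left and a growing one
    // on the right, instead of the usual filled rectangle
    class FileStackProgressBarUI extends BasicProgressBarUI {

      private static final int FILES = 10;

      @Override
      public void paint(Graphics g, JComponent c) {
        JProgressBar bar = (JProgressBar) c;
        int done = (int) Math.round(bar.getPercentComplete() * FILES);
        int fileHeight = c.getHeight() / FILES;

        g.setColor(Color.DARK_GRAY); // files still to copy, on the left
        for (int i = 0; i < FILES - done; i++) {
          g.fillRect(2, c.getHeight() - (i + 1) * fileHeight, 30, fileHeight - 2);
        }

        g.setColor(Color.BLUE); // files already copied, on the right
        for (int i = 0; i < done; i++) {
          g.fillRect(c.getWidth() - 32, c.getHeight() - (i + 1) * fileHeight, 30, fileHeight - 2);
        }
      }
    }

    // usage: progressBar.setUI(new FileStackProgressBarUI());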

Here’s my weak attempt at creating a Cartoon-like Progress Bar (you will need Java installed to run that), but it probably took me more time to configure the JNLP than to actually create the animation…

webstart

By the way, I greatly recommend Romain’s book Filthy Rich Clients, which I have added to my Safari bookshelf and which, considering my attempt at animating Swing, I should really finish reading… :)


Refactorbot v0.0.0.2: just use it!

Friday, July 27th, 2007

Following the term coined by Jason Gorman, here is a new version of the Refactorbot that you can use to test your models with:

download Zip file
(binaries + libraries + source + Eclipse workspace = 4Mb).

XMI loading capability

In this version, you can feed it an XMI file (I create mine with ArgoUML) and it will attempt to create a better model for it.

The XMI loading, however, is very - very - slow for big models… I coded it using XPath (Xalan-j precisely) and that proved so sluggish that I implemented a cache system: after the first creation of the in-memory model of your XMI file (be patient), it will create a .ser version of it in the same directory and reuse it for subsequent runs.

Because of the nature of the algorithm (using random refactorings), you may want to execute the program many times for the same model, and I can guarantee you that this cache will come in quite handy!

New refactoring algorithm

I have implemented a new algorithm that changes only one class at each iteration: it will randomly select a class, randomly decide to create a new package and/or randomly choose the package to put the class in. It will then run the metrics again and keep or discard the new model based on the new metrics.
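For the curious, this is essentially a random hill climb; here is a minimal sketch of the loop (the Model interface is a hypothetical stand-in for the real XMI model and metrics suite):

    import java.util.Random;

    class RandomRefactoringLoop {

      interface Model {
        Model moveRandomClass(Random random); // returns a mutated copy
        double score();                       // combined metrics, higher is better
      }

      static Model improve(Model reference, int iterations, Random random) {
        for (int i = 0; i < iterations; i++) {
          Model candidate = reference.moveRandomClass(random);
          if (candidate.score() > reference.score()) {
            reference = candidate; // keep the improvement
          } // otherwise discard the candidate and try again
        }
        return reference;
      }
    }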

Don’t worry, the original algorithm, which attempted to generate a complete new model at each cycle, is still here. It is just that I thought it would be interesting to have different algorithms to play with.

Furthermore, I think that this second algorithm is closer to Jason’s initial view that the Refactorbot would do 1 random refactoring and then run all tests to prove that the system has been improved…

Using it

For you lucky Windows users with J2SE 1.5+ already installed, there’s a batch file in the archive that lets you run the application directly; just run:

refactorbot.bat myxmifile.xmi 1000 p

The others will have to install Java 1.5 or greater and launch the refactorbot.improvemetrics.ImproveMetrics class. The required libraries are provided in the lib folder.

The output is still very crude because it will only tell you the list of packages it has generated and the classes they contain. I should produce an XMI output very soon, but that’ll wait until I learn a bit more about XMI!

Your impressions

My own results are quite encouraging: I have tried the Refactorbot with a sizeable model (177 classes in 25 packages), and although the first loading of the XMI file is slow (it has 625 associations in a 20Mb file, and that’s what takes most of the time), the improvement of the model is quite fast! Granted, it is quite easy to improve on that model (which I reverse-engineered from a project’s codebase with ArgoUML), but the insight I got was still invaluable!

However, this is probably the first usable version of the Refactorbot, so I would like to hear about your own experience with the automatic refactoring of your own models! Send me an email at contact@<my domain here>.com; that’ll help improve the program…

Oh and by the way, I care about software!


Automated Design Improvement

Friday, July 20th, 2007

Jason Gorman, again, inspired me to write a new blog post. Some time ago, he offered an OO design challenge in which a design’s maintainability had to be improved without any knowledge of the underlying business case.
I quickly gathered that you could rely solely on metrics to determine the quality level of a design, and did a few trials myself (see below) to improve on the metrics[1].
Jason posted his own solution yesterday, and I suddenly remembered one of his earlier posts suggesting that we should be able to automate this process. I detail such a solution further in this article and give one possible (better?) solution to the design challenge.

Trying to find a methodology

My first attempt at making a better model was to try and see the patterns “through” the model. I moved a few classes around in ArgoUML, generated the Java classes and ran the Metrics plugin for Eclipse… alas, although the normalized distance from main sequence[1] was quite good (D=0.22), the relational cohesion (R=0.55) was worse than the reference model’s (R=0.83)!

First attempt at design challenge
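For reference, and to the best of my understanding of the page cited in [1], the two metrics are computed as follows (for a package: A is its abstractness, I its instability, Rel the number of relationships between its own types, and N its number of types):

    D = |A + I - 1|          (normalized distance from main sequence; 0 is best)

    R = (Rel + 1) / N        (relational cohesion; higher means more cohesive)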

In order to improve a model’s metrics with consistency, I had to devise a methodology.
I ordered the classes by their distance in the dependency graph: the more a class is depended upon, the more stable it should be, for maintainability’s sake. The dependency arcs are as follows:

B -> A -> C -> G
B -> A -> C
B -> A
B -> D
E -> D
E -> F
H -> D
H -> F

This prompted me to put the classes in four different packages like this:
Second attempt at design challenge
Not very different from my model created without an applied methodology, but it has a great flaw: there is a cycle between p2 and p3! (and awful metrics too: D=0.40 and R=0.66)
Moving class A back to package p2 does break the cycle and improve the normalized distance, though only slightly (D=0.375).

Automating the search for a solution

At that point, I went to bed and left it running as a background thought until Jason posted his own solution to the challenge… the way he proceeded with the refactorings reminded me of one of his earlier posts (which I can’t seem to find any more) suggesting that we might be able to build a robot to perform random refactorings, keeping those that improved the overall design of a system… if I couldn’t devise a method for solving this problem, I had better leave it to chance completely!

So I built a simple version of such a lucky robot, with a very simple design: it just picks classes from the reference model and, for each of them, randomly decides whether to create a new package for it or to choose, still randomly, an existing package to put it in…
Once the model is fully built, it runs a few metrics, compares them to the reference model’s and, if they show an improvement, keeps the generated model as the new reference model (otherwise it discards it), then starts another cycle.
And here is what it produced, after a few thousand cycles:

Third attempt at design challenge

It is definitely much more complex than any other model I could have come up with by hand, but it translates into nice metrics: D=0.1875 and R=1.0!

This leads me to believe that, with a bit of work and time, we could come up with a more advanced solution that would help designers get the best out of their designs without risking breaking everything…

You can download the rather crude source code if you wish to have a look at the application.

[1] see http://aopmetrics.tigris.org/metrics.html for a good explanation of a few software metrics


Bug in the W3C HTML Validator?

Monday, July 16th, 2007

It seems that I have found a quirk in the W3C Markup Validation Service.

When I validate this page (template of the Who’s Who in France website, without any content), the Validation Service passes (see the validation result).

When I validate this page (the same page with home page content), it raises an error about an unclosed </div> (see the validation result).

But I can’t seem to find the error! I have even tried to validate the document in an Eclipse plugin, and it does indeed validate; I’m going bonkers here!

Am I that daft, or could it be that I found a bug of some kind in the HTML validation service?

Comments welcome!
