Archive for the ‘Web/Tech’ Category

The road to hell is paved with loose typing

Wednesday, June 13th, 2007

I am now (I just took a break to type this note) assessing the quality of a Java codebase using several methods, including actually trying to understand how everything is tied together. Well, it would be a euphemism to say it is convoluted.

The developers chose to have most methods accept as parameters (or return) objects, arrays of objects, two-dimensional arrays of objects or a whole range of unspecified collections (HashMaps, TreeMaps, ArrayLists and Vectors, without any consideration of their actual intended use), and they definitely didn’t use Java 5 generics!

Moreover, methods can return collections containing collections of one element, arrays of objects where only the value at a specific index is populated or used, or arrays of objects mixing very different types (a String at index 0, an Object[] at index 1, null at index 2 and a String or Integer at index 3, depending on the size of the array at index 1…).
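To make the anti-pattern concrete, here is a hypothetical sketch of the kind of signature described above (the names and the `Customer` alternative are invented for illustration, not taken from the codebase in question): with a grab-bag `Object[]`, the caller must know by convention what lives at each index, whereas a small typed class lets the compiler check everything.

```java
import java.util.Arrays;
import java.util.List;

public class LooseTyping {

    // The loose version: an array of unrelated types, decoded by convention only.
    static Object[] findCustomer(Object[] criteria) {
        return new Object[] { "Jane Doe", new Object[] { "order-1", "order-2" }, null, 42 };
    }

    // A typed alternative: the structure is explicit and compiler-checked.
    static class Customer {
        final String name;
        final List<String> orderIds;
        Customer(String name, List<String> orderIds) {
            this.name = name;
            this.orderIds = orderIds;
        }
    }

    static Customer findCustomerTyped(String name) {
        return new Customer(name, Arrays.asList("order-1", "order-2"));
    }

    public static void main(String[] args) {
        Customer c = findCustomerTyped("Jane Doe");
        System.out.println(c.name + " has " + c.orderIds.size() + " orders");
    }
}
```

The typed version costs one small class, and in exchange every misuse becomes a compile error instead of a runtime surprise.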

Worst of all, the Javadoc associated with the methods is either non-existent, wrong (cut and pasted from another method), useless (the description is just the name of the parameter split into words) or completely unintelligible.

It is impossible that this code has ever been reviewed by any competent Java developer. I think the assessment I am running is actually turning into a code review.

Now I must find a way to quickly improve code quality and communicate it efficiently to the new development team… any ideas?

Object Oriented old rope

Tuesday, June 5th, 2007

I found a very good refresher article on Object-Oriented Design at DDJ that I definitely would recommend reading: Software Complexity: Bringing Order to Chaos.

It explains how we need to divide a complex system into much smaller systems (divide and rule) in order to be able to cope with the complexity, and that it is much simpler to extend a system when you have confidence in its sub-components (automated testing anyone?).

Don’t miss the sidebar for a crash course on different design methods (I suddenly realise why I studied SADT, Merise and OO).

Oh, and by the way, you might also learn that we all have short attention spans (limited to about seven items)… so no boasting allowed anymore! :)

Quality can come at no price

Friday, May 25th, 2007

“Quality, Features, Time: pick two” is something you often hear on development projects. I am more a proponent of discussing only Time and Features with clients; but that’s only because I believe Quality can be achieved with very low effort, by following a few simple principles.

They might sound like common sense, but from (recent) experience I know there are many places where there is room for improvement!

Test, test, test and test!

OK, we’ve heard it many times from the Agile crowd, but however you do it, testing has to be done. Not once, not twice, but all along the development. This is well documented and it really makes a difference:

  • screen usability tests[1]: even if you use just the hallway usability test, it will help make sure you are not going in completely the wrong direction with the UI; 80% of usability blunders can be picked up in a day’s worth of usability testing and fixed on the fly.
  • automated code tests (unit tests and business cases)[2]: you really have to convince your developers that they can’t release code for which they don’t have automated tests. If it can be tested, it is better designed; if you have a test for a bug, the bug will never appear again (or at least you’ll know it).
  • user acceptance tests[3]: the customers have to be educated to come up with testable requirements; they will do the testing themselves!
  • load tests: if performance is likely to be an issue, test at least at the worst-case expected load level. It is better if the tests are automated.
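The “if you have a test for a bug, the bug will never appear again” point above can be sketched in plain Java, without any framework (in practice you would use JUnit or TestNG, as noted in footnote [2]); `parsePercentage` and its old empty-string bug are invented here for illustration:

```java
public class RegressionTest {

    // The method under test. Empty input used to throw a NumberFormatException;
    // the guard clause below is "the fix".
    static int parsePercentage(String s) {
        if (s == null || s.trim().length() == 0) {
            return 0; // the fix for the (hypothetical) reported bug
        }
        return Integer.parseInt(s.trim().replace("%", ""));
    }

    public static void main(String[] args) {
        // Normal case.
        check(parsePercentage("80%") == 80, "plain value");
        // The regression test: once the bug is fixed and pinned here,
        // it can never come back unnoticed.
        check(parsePercentage("") == 0, "empty input");
        System.out.println("all tests passed");
    }

    static void check(boolean ok, String name) {
        if (!ok) throw new AssertionError("failed: " + name);
    }
}
```

Run it after every change: if someone later removes the guard clause, the build shouts immediately instead of the customer.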

All in all, it seems like a lot to do, but quick usability tests will avoid complete redesigns (1 day), automated code tests will keep you going free of regression bugs (cost of developing the tests balanced by the gains on regression bugs), UATs will make sure “everyone” understands the requirements (costs included in the requirements specification) and load tests uncover nasty surprises early on (a few days at worst, but gains on redesign at best)!

Choose good working environments

This looks like a pretty standard thought at first, but if you try to assess what software people use in your organisation, you will certainly find out that many use programs that don’t fit their needs. And when I say needs, I mean features as well as usability. People will go to unbelievable lengths to have their programs (most probably installed by default) do what they want them to.

For instance, it doesn’t look good when your testers save emails and paste screenshots into Word documents in order to keep track of bugs and tests. Have them use a bloody testing database and bug tracker!

Now imagine what you are going to impose on your developers when you choose a development platform. You have to have a pretty damn good reason to choose to have your logic coded at the database level (think PL/SQL) when you know for sure that it is probably the worst development environment possible (except maybe for APL :).

Tooling matters a great deal. It can make the difference between solving a bug in half a day and spending one week with fear gripping your bowels.

Choose your project’s tooling according to your staff skills and not according to some dubious trend. And if that involves implementing a good skills management strategy in your organisation, well, isn’t that what you should be doing anyway?

Check upon your experts

You can be confident about the quality of your software when you are confident in your experts. They are the ones you take your advice from and the ones you entrust with the development of your system.

Advice can be checked quite easily. You can probably get a good idea of an issue with a bit of googling… and you can certainly find someone in your network who has already gone through the same questions.

Work can be checked too, but it requires a bit more tact. However, if your DBA (OK, I have something against databases today, but the point could be made about any professional… think plumbers) consistently spends more time doing things properly than your good sense tells you he should, you might want to know whether you can rely on him!

Do those checks regularly. Peer review is probably the approach that yields the most effective results in terms of actually getting the answer you’re looking for, but it implies you have peers at hand who can be diverted from their actual jobs every now and again. Also, your team might feel, rightly, that they are under scrutiny… and they might not like it.

The best way to do your checks is to have each member of your team explain what they are doing to the other members of the team. Have them show their code, their database configuration, their UI… peer review will occur naturally. And better, knowledge will trickle down through your team!

Surround yourself with good people

It’s a corollary to the previous one, but also a summary of this whole article. If the people you are working with are dedicated to doing their best at coding, optimising their environment, not talking BS, testing and developing their knowledge, there is a great chance they will produce good quality software!

Now all you have to do is actually build the system your customer wants!

[1]Register for Jakob Nielsen’s Alertbox and read one or two of his books for a first approach. Most web usability principles can be adapted with good results to rich-client UIs.
[2]See JUnit or TestNG for Java, NUnit for .Net, but these principles can be adapted (and indeed found online) for any language.
[3]As a starter, read this article on customer-driven development and testing.

Unchecked warnings and type erasure with Java Generics

Wednesday, March 14th, 2007

I am currently cleaning a whole code base of all its warnings, and I kept stumbling upon a few warnings related to the use of generics. And all I can find on Google is people telling each other to use @SuppressWarnings("unchecked")…

For a start, I am not a big fan of annotations (I will probably post some day on that), but this is more like sidestepping the problem and acquiring bad practices for dealing with warnings.

Two different warnings

Consider the following example encountered while using Apache Log4J:

Enumeration<Appender> e = log.getAllAppenders();

This will generate the following warning: “Type safety: The expression of type Enumeration needs unchecked conversion to conform to Enumeration<Appender>”

OK, fine! So let’s cast it to the proper type:

Enumeration<Appender> e = (Enumeration<Appender>)log.getAllAppenders();

Well, the result is not quite what we expected, in that it is still generating a warning: “Type safety: The cast from Enumeration to Enumeration<Appender> is actually checking against the erased type Enumeration”.

Even though we are pretty sure this should never generate any ClassCastException, these warnings are just plain annoying from a code quality point of view…

More on erased types

The implementation of Generics in Java 1.5 came with a nice advantage: you could code in fancy spanking-new 1.5 generics and still generate classes that were compatible with previous versions of Java!

They did that by enforcing type safety with generics at compilation time only, otherwise generating code compatible with Java 1.2+ by removing any parameterised type from the compiled class.

In effect, with generics it is now easier to code type-safe classes, but the checking is not done at runtime. If you find a way to feed your application with instances that do not comply with the intended generic behaviour (in our example, an Enumeration of, say, String instead of Appender), this will certainly result in a nice ClassCastException!
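Here is a self-contained sketch of erasure biting at runtime: the generic check only exists at compile time, so smuggling the wrong type in through a raw reference compiles (with exactly the kind of unchecked warning this post is about) and blows up later.

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {

    @SuppressWarnings("unchecked")
    public static String demo() {
        List<Integer> ints = new ArrayList<Integer>();
        List raw = ints;          // raw view: after erasure it is the very same object
        raw.add("not an int");    // unchecked warning here, but it runs fine
        try {
            Integer i = ints.get(0); // the compiler inserts a cast to Integer...
            return "got " + i;
        } catch (ClassCastException e) {
            return "ClassCastException at the use site"; // ...which fails here
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Note that the exception is thrown where the element is *used*, possibly far away from where the bad value was inserted, which is what makes these bugs so unpleasant to track down.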

And that’s the way it is implemented in Java, full stop! No discussion possible (until they decide to cut ties with previous versions of Java). So why have a warning at all?

The solution

Actually, there is a very simple answer to this (apart from the fact that future versions might not provide backward compatibility)…

It is obvious we are trying to get an Enumeration of Appender instances in order to apply some processing to each of them; …but wait!

Appender is an interface… (that’s what got me on the tracks for the solution!)

What you are actually getting is an Enumeration on implementations of Appender; that is, any implementation possible!!!

So really, the code logically needs to be written as follows:

Enumeration<? extends Appender> e = (Enumeration<? extends Appender>)log.getAllAppenders();

That’s right! We want to be able to deal generically with any subtype of Appender… seems soooo easy with hindsight, doesn’t it?
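The same point can be shown without Log4j at all; in this self-contained sketch, the invented `Shape` interface stands in for `Appender`: an `Enumeration<Circle>` is not an `Enumeration<Shape>`, but it *is* an `Enumeration<? extends Shape>`.

```java
import java.util.Enumeration;
import java.util.Vector;

public class WildcardDemo {

    interface Shape { String name(); }

    static class Circle implements Shape {
        public String name() { return "circle"; }
    }

    // Accepts an enumeration of ANY subtype of Shape.
    static String first(Enumeration<? extends Shape> e) {
        return e.hasMoreElements() ? e.nextElement().name() : "empty";
    }

    public static void main(String[] args) {
        Vector<Circle> circles = new Vector<Circle>();
        circles.add(new Circle());
        // circles.elements() is an Enumeration<Circle>: it would NOT be
        // assignable to Enumeration<Shape>, but it fits the wildcard type.
        System.out.println(first(circles.elements()));
    }
}
```

Declaring the wider `Enumeration<Shape>` instead would reject perfectly valid callers, which is exactly why the wildcard is the logically correct type here.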

mod_rewrite and url parameters

Monday, March 12th, 2007

Apache’s mod_rewrite module is very potent for manipulating URLs on the server side; it allows you to:

  • hide the technology used to dynamically generate the pages
  • hide the construction of the URL, making it more difficult to modify some parameters in order to get the application into a non-desired state, thus limiting the risk of an exploit being discovered by a hacker or a script kiddie
  • improve human legibility of URLs
  • improve search-engine crawling and indexing

The problem, though, is that you have to understand what is happening under the hood to be able to produce effective rewriting that will not be worse than the original URL scheme!
A problem I have been looking to resolve for quite some time is how to pass on the parameters from the exposed URL to the rewritten one…

For instance, if I have the URL myserver/book_15645854.html, I can simply rewrite it to call the myserver/book_details.php?bookid=15645854 URL by using the following rule with a regular expression:

RewriteRule ^book_([0-9]+)\.html$ book_details.php?bookid=$1

But now, what if I want to rewrite the following URL


so that it populates the source email field by default with the email passed as a parameter to the dynamic contact page like myserver/contact.php? ?

Bad news is that you can’t match the URL parameters with the regular expression in the RewriteRule. It just won’t work! And you can’t seem to find anything in the documentation that tells you why…

Good news is, it’s actually quite simple; but, coming back to what I said about looking under the hood, you have to understand that the pattern you define in the RewriteRule is matched against the resource part of your URL, and not against the overall request URL with all its parameters!

The actual parameters are contained in a variable called QUERY_STRING that you can use as follows:

RewriteRule ^contact_form\.html$ contact.php?%{QUERY_STRING}

…now I will try to find out how to collect only one of the parameters - ideas welcome!
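On collecting a single parameter: one approach (a sketch only; the parameter name `email` is assumed here from the contact-form example) is to match the query string in a RewriteCond, since RewriteCond patterns *can* see %{QUERY_STRING}, and then reuse the capture with a %N backreference:

```apache
# Hypothetical: assumes the incoming query string carries an "email" parameter.
# %2 refers to the second capture group of the last matching RewriteCond.
RewriteCond %{QUERY_STRING} (^|&)email=([^&]+)
RewriteRule ^contact_form\.html$ contact.php?email=%2
```

Because the substitution contains its own query string, it replaces the original one; add the [QSA] flag to the RewriteRule if you want to keep the remaining parameters as well.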

Finally, you might want to have a look at this more complete article: Mod_rewrite Introduction and Cheat Sheet; it has everything you want to forget about mod_rewrite!

SCCS Documentation anyone?

Tuesday, January 16th, 2007

For those of you who wonder where you can find decent documentation, I would recommend either

For the record, no one seems to be using SCCS anyway… please drop me a line if you do, because when you type “SCCS Documentation” into Google, the 3rd result is some guy asking for documentation… back in 1995!!!

IE soon to be dead?

Wednesday, January 3rd, 2007

This is the third post in a series analysing the trends in browser usage on a website I am managing (previous posts were Is Firefox losing ground? and Standards compliant browsers neck and neck).

I couldn’t post over the end of 2006 while I spent the festive period back in France (which, by the way, reminds me to wish you a happy new year), but here we are, back again with a lot of good resolutions.

I had a look this morning at the statistics of the Who’s Who in France browser access, and I could have shown yet another graph of Internet Explorer 7 taking ground from other versions of IE… but it would have been a repeat of last post’s graph showing that IE7 use (nearly 17%) has just overtaken Firefox (just 12%) and other browsers (about 12%).

Instead I chose to draw a graph of the use of Internet Explorer (any version) over the past two years:

Internet Explorer statistics over 2005 and 2006

Internet Explorer has lost about 14% market share over the course of 2005 and 2006, and it looks like a pretty steady curve. If the trend continues and Internet Explorer 7 doesn’t catch up (it doesn’t look like Vista is going to catch up soon either), we could see Internet Explorer being overtaken by another browser (Firefox 3?) within the next two to three years.

It looks like people are starting to get used to something other than Microsoft’s browsing software… anyone in the running to create a brand new browser?

Standards-compliant browsers, neck and neck

Friday, December 8th, 2006

This article is a follow-up of the article Is Firefox losing ground?.

It seems that in the past eight days IE7 has grown to approximately the same market share as Firefox…
Browser stats for 07/06 to 12/06 showing IE7, FF and others being neck-and-neck

although this trend seems to have stagnated in the past week:

IE7 stats for 30/11 to 08/12 2006

The coming weeks are going to be really interesting!

Is Firefox losing ground?

Friday, December 1st, 2006

I am currently running a small experiment on the “spreading” of Firefox.

I am managing a public website (Who’s Who In France), mostly for fun and because I like the people behind this business, and we recently completely reworked the design so that it matches web standards more closely; I am very close to the holy grail of XHTML validation, but still have to figure out why, when I fix the remaining validation errors, this valid page breaks…

Anyway, I have been looking at the logs over the past week and decided to draw a graph of the different browsers’ shares of access to the website, especially because I wanted to know how IE7 was doing.

The following graph is the result of my investigation of the statistics over the past two years (versions of Internet Explorer preceding IE7 have been omitted for improved legibility of the graph):

Browser statistics 2005-2006

Two interesting facts that I can see in this graph:

  • Internet Explorer 7 is gaining ground on previous versions; that was predictable, as it is downloaded as part of Windows Update… myself included on the company laptop (for those who are interested, you can install previous versions of Internet Explorer for compatibility testing)
  • In 2006, other browsers gained the high ground on Firefox; not everyone is happy with Firefox, and the market for alternative browsers is very wide.

This makes me think that this experiment is far from finished. I believe IE7 will ultimately gain ground on the alternative browsers. Not that I wanted it to, but I tested it and couldn’t really find anything that I disliked (so far).

Follow-ups in the coming weeks…

The Internet is so damn complex

Tuesday, November 21st, 2006

Today, a colleague and I were discussing the use we make of the Internet. While I am personally mainly both an information gatherer and a solutions provider, my colleague makes mainly social use of the Internet.

Both of us use email and search services (namely Google) extensively, but that’s approximately all we have in common. He is used to publishing photos and videos on Flickr and YouTube respectively, posting blog entries on Blogger and managing a MySpace page.

And that’s where the difference is: not being a great social animal, I feel overloaded by the complexity of having to manage the bits and pieces of myself on the Internet through a lot of different services.

I find it so difficult to believe that this is how people want their information to exist: scattered all over the Internet, with weak or nonexistent possibilities to syndicate all their content and apply changes on a general scale (like changing your name on all your profiles when you get married).

Moreover, if your service provider goes bust, or if for any reason you lose access to your account, you have to recreate all your content, or it is lost forever. Furthermore, you would want every page linking to you (or your info) to be updated to reflect the changes… and what we certainly don’t want is a single mammoth to manage everything we would want to do, do we?

We might be onto something here…

…semantic web? Probably doesn’t solve anything for 99% of the content on the Internet.
…Web 3.0? Whatever that might be.
…peer-to-peer networks? Not a new idea, for sure, but one that still has a lot of potential for development. (Freenet maybe?)
