Archive for the ‘Agility’ Category

Software Craftsmanship Conference 2009 report

Wednesday, March 4th, 2009

SC2009 was one of those conferences you leave with more questions than answers. And it feels good! All in all, I felt humbled by the quality of the sessions as much as by that of the participants. Evidently, everyone was very pleased to be there and you could feel it in the bustle all around!

The conference was hosted and catered at the BBC Worldwide building and, as an architecture nerd, I have to say I loved the building and its interior decoration; it must be a fantastic place to work in, but that is hardly the point of this post, I simply digress.

First, you might want to head to Twitter / #sc2009 to see how participants and speakers experienced the event in real time.

Also, broader coverage of all the blogs and reports on the conference will certainly pop up on Jason Gorman’s blog (some day, once he gets round to it). More specifically, you’ll find a vox populi video of participants, including that of a guy who has just realised he put on a terrible accent in front of the camera ;-)

The complete programme of the conference is available on the conference’s website and, gathering from participants in the other sessions, their content was pretty interesting too; I wonder what kind of sessions were rejected by the selection committee.

Programming in the small

Some time ago, I had the opportunity to have a conversation over a pint with Ivan Moore and Mike Hill, debating, and mostly agreeing, on how code should be formatted. Before that conversation, I felt like an alien for insisting on a 400-character line length in Eclipse in place of the 80 characters that is the general, and wrong, consensus.

This session drew on their own experience of formatting, writing and refactoring code, and on how to challenge accepted “best practices” of code writing. It was great fun to refactor the bad code examples they provided!

Specification Workshops

This session was subtitled “The Missing Link for ATDD & Example-Driven Development” and showed how to actually specify acceptance test cases through example brainstorming: gather a lot of people who have knowledge of the business rules you want to specify and have them give examples of those rules. Discuss.

Definitely something I would try, but I’ll probably read Gojko Adzic’s book first to gain more insight into how to do it without confusing it with iteration planning…

Responsibility-driven Design with Mock Objects

This session, however fun, didn’t quite cut it for me. First, the facilitators started by “pitying” people who don’t do TDD; that could be a whole separate post in itself, but that kind of statement makes me boil inside… and I am not even talking about all the Java/.Net snickering!

Plus, we only scratched the surface of the purpose of the session, that is, learning about responsibility-driven design; we mocked objects all right, but we spent most of the time refactoring the code we had written…

However, I learned a lot from this session: they used Ruby as the supporting language for the demonstration… and I have to say I was rather impressed by a few things they did with it! Should I really, really learn Ruby, though?

Empirical Experiences of Refactoring in Open Source

In this session, Steve Counsell explained how he analysed a few open-source projects to see how refactorings were applied throughout a project’s lifecycle. Although very interesting, it was somewhat predictable: the most complex refactorings (e.g. remove duplication, extract superclass…) are applied ten times less often than the simpler ones (e.g. rename variable, rename method…)!

Thing is, even though this research struck a chord with me, as I am very keen on refactoring (even automated refactoring), it seemed like few conclusions were actually drawn from it; Steve did not tell us whether he had controlled for feature additions or development team turnover… Furthermore, because the cases studied were open-source projects, I am not sure how much it would apply to corporate systems or commercial products. I guess I will have to find out!

Test-Driven Development of Asynchronous Systems

This session was possibly the most underwhelming one of the day; partly because it was the last one, mostly because it didn’t stimulate as much thought as the other ones.

Granted, Nat Pryce has implemented a very clever way of testing asynchronous processes in integration tests (basically, process probes that can tell whether the system has reached a stable state before executing the test assertions)! I think if I ever face this issue, I would be very likely to use the same approach; or purchase a product that does the trick! Wink-wink… :)

The take-home lesson

As I said before, I met a lot of very interesting people who care about software as much as I do, and the overall experience left me buzzing with thoughts.

But the one thing I got out of it, and that I will definitely be even more insistent about from now on, is: refactor, refactor, refactor!


Get your place at a FREE software development conference.

Tuesday, December 2nd, 2008

OK, it’s probably a bit pushy to highlight the fact that the Software Craftsmanship 2009 conference is free, but it’s the best I can do in terms of attention-grabbing headlines.

Now that you have actually started to read, I can go on and say that if you’re a developer worth your salt and you can make it to London on 26th February, this is THE conference to attend. You’ll get a chance to improve your craft by learning from the masters (as the conference is chaired by Jason Gorman, you can be pretty sure the talks will be very exciting). Plus, my guess is you will even be able to show off your skills if you want to!

What’s more, you can even submit your own session proposal… so go on, make yourself heard! Or simply register, there won’t be enough seats for everyone…


Agile? Of course we are agile!

Friday, May 23rd, 2008

It seems that all companies these days are doing “agile”. Here’s a pot-pourri collected recently from a few interviews and personal experiences…

Of course we are agile…

  • we are doing pair programming; that is, as long as each programmer works on their own computer… you know, we wouldn’t want you to look like you’re doing nothing
  • we are doing stand-up meetings… around the coffee machine… before the real meetings!
  • we write user stories; well, we don’t really have the time to write them, so we’ll just use the titles in a big MS Project document and talk to you about what you should be doing
  • we use dynamic languages: JavaScript, Flash…
  • look, we must be agile, we are even using open source libraries; as a matter of fact, we use as many open source libraries as possible to show that we are willing to leverage as much existing code as possible - we even have our own architect who tells you which library you will use!
Sometimes, I wish they would proudly say “Agile? Nay, we do waterfall here!”, at least it would be honest… :)


I really care… do you?

Friday, September 28th, 2007

Do you care about software? Do you care about it being specified correctly, designed accordingly, developed with quality, maintained easily, and delivering the intended value to the business? Then take the pledge: icareaboutsoftware.org

Analysts, developers, database administrators, stakeholders… you are all welcome to raise the profile of this campaign!

via icareaboutsoftware.org - Be Part of the First 100


Refactorbot v0.0.0.2: just use it!

Friday, July 27th, 2007

Following the term coined by Jason Gorman, here is a new version of the Refactorbot that you can use to test your models with:

download Zip file
(binaries + libraries + source + Eclipse workspace = 4Mb).

XMI loading capability

In this version, you can feed it an XMI file (I create mine with ArgoUML) and it will attempt to create a better model from it.

The XMI loading, however, is very, very slow for big models… I coded it using XPath (Xalan-J, precisely) and that proved so sluggish that I implemented a cache system: after the first creation of the in-memory model of your XMI file (be patient), it will create a .ser version of it in the same directory and reuse it on subsequent runs.

Because of the nature of the algorithm (using random refactorings), you may want to execute the program many times on the same model, and I can guarantee you that this cache will come in quite handy!
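A rough sketch of what such a cache can look like (the class and method names below are made up for illustration, not the actual Refactorbot code):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;

// Illustrative .ser cache: parse the XMI only when no serialized
// snapshot exists, otherwise reuse the result of the previous run.
public class ModelCache {

    public static Serializable loadModel(File xmiFile) throws Exception {
        File cache = new File(xmiFile.getPath() + ".ser");
        if (cache.exists()) {
            // Fast path: deserialize the model built on a previous run.
            ObjectInputStream in = new ObjectInputStream(new FileInputStream(cache));
            try {
                return (Serializable) in.readObject();
            } finally {
                in.close();
            }
        }
        // Slow path: the expensive XPath-based parsing, then snapshot it.
        Serializable model = parseXmiWithXPath(xmiFile);
        ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(cache));
        try {
            out.writeObject(model);
        } finally {
            out.close();
        }
        return model;
    }

    // Stand-in for the real XPath parsing, which is the slow part.
    static Serializable parseXmiWithXPath(File xmiFile) {
        HashMap<String, String> model = new HashMap<String, String>();
        model.put("source", xmiFile.getName());
        return model;
    }
}
```

The point is simply that Java serialization turns the expensive XPath parsing into a one-off cost; delete the .ser file whenever the XMI changes.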

New refactoring algorithm

I have implemented a new algorithm that changes only one class at each iteration: it will randomly select a class, randomly decide whether to create a new package and/or randomly choose the package to put the class in. It will then run the metrics again and keep or discard the new model depending on whether they have improved.

Don’t worry, the original algorithm that attempted to generate a complete new model is still here. It is just that I thought it would be interesting to have different algorithms to play with.

Furthermore, I think this second algorithm is closer to Jason’s initial vision that the Refactorbot would perform one random refactoring and then run all the tests to prove that the system has been improved…
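In spirit, the keep-or-discard loop looks like this (a minimal sketch; the class names and the scoring function are placeholders for illustration, the real tool plugs in its design metrics instead):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Hill-climbing sketch of the one-class-at-a-time algorithm: mutate one
// class's package at random, re-measure, and keep only improvements.
public class RandomRefactorer {

    public static Map<String, String> improve(Map<String, String> classToPackage,
                                              int iterations, Random random) {
        Map<String, String> best = new HashMap<String, String>(classToPackage);
        double bestScore = score(best);
        List<String> classes = new ArrayList<String>(best.keySet());
        for (int i = 0; i < iterations; i++) {
            Map<String, String> candidate = new HashMap<String, String>(best);
            // Randomly select a class...
            String cls = classes.get(random.nextInt(classes.size()));
            // ...and randomly create a new package or reuse an existing one.
            String pkg = random.nextBoolean()
                    ? "p" + random.nextInt(classes.size())
                    : new ArrayList<String>(best.values()).get(random.nextInt(classes.size()));
            candidate.put(cls, pkg);
            double candidateScore = score(candidate);
            if (candidateScore > bestScore) { // keep; otherwise discard
                best = candidate;
                bestScore = candidateScore;
            }
        }
        return best;
    }

    // Placeholder objective: fewer packages score higher. The real tool
    // would compute design metrics (D, R...) here instead.
    public static double score(Map<String, String> classToPackage) {
        return 1.0 / new HashSet<String>(classToPackage.values()).size();
    }
}
```

Because only strict improvements are kept, the model's score can never regress between iterations, which is exactly why running it many times on the same model pays off.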

Using it

For you lucky Windows users with JSE 1.5+ already installed, there’s a batch file in the archive that lets you run the application directly; just run:

refactorbot.bat myxmifile.xmi 1000 p

Everyone else will have to install Java 1.5 or greater and launch the refactorbot.improvemetrics.ImproveMetrics class. The required libraries are provided in the lib folder.

The output is still very crude: it will only tell you the list of packages it has generated and the classes they contain. I should produce an XMI output very soon, but that will wait until I learn a bit more about XMI!

Your impressions

My own results are quite encouraging: I have tried the Refactorbot on a sizeable model (177 classes in 25 packages), and although the first loading of the XMI file is slow (it has 625 associations in a 20Mb file, and that’s what takes most of the time), the improvement of the model is quite fast! Granted, it is quite easy to improve on that model (which I reverse-engineered from a project’s codebase with ArgoUML), but the insight I got was still invaluable!

However, this is probably the first usable version of the Refactorbot, so I would like to hear about your own experience with the automatic refactoring of your own models! Send me an email at contact@<my domain here>.com, that’ll help improve the program…

Oh and by the way, I care about software!


Automated Design Improvement

Friday, July 20th, 2007

Jason Gorman, again, inspired me to write a new blog post. Some time ago, he offered an OO design challenge in which a design’s maintainability had to be improved without any knowledge of the underlying business case.
I quickly gathered that you could rely solely on metrics to determine the quality level of a design, and I did a few trials myself (see below) to improve on the metrics[1].
Jason posted his own solution yesterday, and I suddenly remembered one of his earlier posts suggesting we should be able to automate this process. I detail such a solution further in this article and give one possible (better?) solution to the design challenge.

Trying to find a methodology

My first attempt at making a better model was to try and see the patterns “through” the model. I moved a few classes around in ArgoUML, generated the Java classes and ran the Metrics plugin for Eclipse… alas, although the normalized distance from the main sequence[1] was quite good (D=0.22), the relational cohesion (R=0.55) was worse than the reference model’s (R=0.83)!

First attempt at design challenge
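For reference, both metrics are straightforward to compute; here is a sketch of the formulas as I understand them (Robert Martin’s package-design metrics for D, and relational cohesion for R; double-check against the page in footnote [1] before relying on them):

```java
// Sketch of the two package metrics used throughout this post.
public class PackageMetrics {

    // Instability I = Ce / (Ca + Ce): efferent vs. afferent couplings.
    public static double instability(int afferent, int efferent) {
        return (afferent + efferent) == 0
                ? 0.0 : (double) efferent / (afferent + efferent);
    }

    // Abstractness A = abstract types / total types in the package.
    public static double abstractness(int abstractTypes, int totalTypes) {
        return totalTypes == 0 ? 0.0 : (double) abstractTypes / totalTypes;
    }

    // Normalized distance from the main sequence: D = |A + I - 1|.
    // 0 means the package sits on the ideal abstractness/stability line.
    public static double distance(double abstractness, double instability) {
        return Math.abs(abstractness + instability - 1.0);
    }

    // Relational cohesion: R = (internal relationships + 1) / types.
    // The "+1" keeps single-type packages from scoring zero.
    public static double relationalCohesion(int internalRelations, int types) {
        return types == 0 ? 0.0 : (internalRelations + 1.0) / types;
    }
}
```

For instance, a fully concrete package (A=0) with instability I=0.78 lands at D=0.22, the value measured above.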

In order to improve a model’s metrics consistently, I had to devise a methodology.
I ordered the classes by their distance in the dependency graph: the more a class is depended upon, the more stable it should be, which is better for maintainability. The dependency arcs are as follows:

B -> A -> C -> G
B -> A -> C
B -> A
B -> D
E -> D
E -> F
H -> D
H -> F

This prompted me to put the classes in four different packages like this:
Second attempt at design challenge
Not very different from the model I created without any methodology, but it has one major flaw: there is a cycle between p2 and p3! (and awful metrics too: D=0.40 and R=0.66)
Moving class A back to package p2 does break the cycle and improves the normalized distance, though only slightly (D=0.375).

Automating the search for a solution

At that point, I went to bed and left it running as a background thought until Jason posted his own solution to the challenge. The way he proceeded with the refactorings reminded me of one of his earlier posts (though I can’t seem to find it any more) suggesting we might be able to build a robot that performs random refactorings, keeping those that improve the overall design of a system. If I couldn’t devise a method for solving this problem, I had better leave it to chance completely!

So I built a simple version of such a lucky robot, with a very simple design: it just picks classes from the reference model and, for each of them, decides randomly whether it should create a new package or choose, still randomly, an existing package to put it in…
Once the model is fully built, it runs a few metrics and compares them to the reference model’s; if it shows an improvement, it keeps the generated model as the new reference model (otherwise it discards it) and does another cycle.
And here is what it produced, after a few thousand cycles:

Third attempt at design challenge

It is definitely much more complex than any other model I could have come up with by hand, but it translates into nice metrics: D=0.1875 and R=1.0!

This leads me to believe that, with a bit of work and time, we could come up with a more advanced solution that would help designers get the best out of their design without risking breaking everything…

You can download the rather crude source code if you wish to have a look at the application.

[1] see http://aopmetrics.tigris.org/metrics.html for a good explanation of a few software metrics


Quality can come at no price

Friday, May 25th, 2007

“Quality, Features, Time: pick two” is something you hear many times on development projects. I am more of a proponent of discussing only Time and Features with clients; but that’s only because I believe Quality can be achieved with very low effort, by following a few simple principles.

You might find that they sometimes sound like common sense but, from (recent) experience, I know there are many places where there is room for improvement!

Test, test, test and test!

OK, we’ve heard it many times from the Agile crowd, but however you do it, testing has to be done. Not once, not twice, but all along the development. This is very well documented and it really makes a difference:

  • screen usability tests[1]: even if you only use the hallway usability test, it will help make sure you are not going in completely the wrong direction with the UI; 80% of usability blunders can be picked up in a day’s worth of usability testing and fixed on the fly.
  • automated code tests (unit tests and business cases)[2]: you really have to convince your developers that they can’t release code for which they don’t have automated tests. If it can be tested, it is better designed; if you have a test for a bug, the bug will never appear again (or at least you’ll know about it).
  • user acceptance tests[3]: the customers have to be educated to come up with testable requirements; they will do the testing themselves!
  • load tests: if performance is likely to be an issue, test at least at the worst-case expected load level. It is better if the tests are automated.
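To make the second point concrete, here is what a minimal regression test can look like. It is framework-free so it stays self-contained, but in practice you would write it with JUnit or TestNG as per footnote [2]; the Invoice class and the bug it pins down are invented for the example:

```java
// Minimal regression test without any framework: once a bug is captured
// by a test like this, it cannot silently reappear.
public class InvoiceTotalTest {

    // Tiny hypothetical class under test, just for the example.
    static class Invoice {
        private double total;
        private double discountPercent;

        void addLine(double amount) { total += amount; }
        void setDiscountPercent(double percent) { discountPercent = percent; }
        double total() { return total * (1.0 - discountPercent / 100.0); }
    }

    public static void main(String[] args) {
        Invoice invoice = new Invoice();
        invoice.addLine(100.0);
        invoice.addLine(0.0); // the case that (hypothetically) used to break
        invoice.setDiscountPercent(10);
        if (Math.abs(invoice.total() - 90.0) > 0.001) {
            throw new AssertionError("regression: discount not applied");
        }
    }
}
```

Run it with every build: the moment someone reintroduces the bug, the build fails instead of a user finding out.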

All in all, it seems like a lot to do, but quick usability tests will avoid complete redesigns (1 day), automated code tests will keep you going free of regression bugs (cost of developing the tests balanced by the gains on regression bugs), UATs will make sure “everyone” understands the requirements (costs included in the requirements specification) and load tests uncover nasty surprises early on (a few days at worst, but gains on redesign at best)!

Choose good working environments

This looks like a pretty standard thought at first, but if you try to assess what software people use in your organisation, you will certainly find out that many use programs that don’t fit their needs. And when I say needs, I mean features as well as usability. People will go to unbelievable lengths to have their programs (most probably installed by default) do what they want them to.

For instance, it doesn’t look good when your testers save emails and paste screenshots into Word documents in order to keep track of bugs and tests. Have them use a bloody testing database and bug tracker!

Now imagine what you are going to impose on your developers when you choose a development platform. You have to have a pretty damn good reason to choose to have your logic coded at the database level (think PL/SQL) when you know for sure that it is probably the worst development environment possible (except maybe for APL :).

Tooling matters a great deal. It can make the difference between solving a bug in half a day and spending one week with fear gripping your bowels.

Choose your project’s tooling according to your staff skills and not according to some dubious trend. And if that involves implementing a good skills management strategy in your organisation, well, isn’t that what you should be doing anyway?

Check upon your experts

You can be confident about the quality of your software when you are confident in your experts. They are the ones you take your advice from and the ones you entrust with the development of your system.

Advice can be checked quite easily. You probably can get a good idea of an issue with a bit of googling… but you certainly can find someone in your network who has already gone through the same questions.

Work can be checked too, but it requires a bit more tact. However, if your DBA (OK, I have something against databases today, but the point could be made with any professional… think plumbers) consistently spends more time doing things properly than your good sense tells you he should, you might want to know if you can rely on him!

Do those checks regularly. Peer review is probably the approach that yields the most effective results in terms of actually getting the answer you’re looking for, but it implies you have peers at hand who can be diverted from their actual jobs now and again. Also, your team might feel, rightly, that they are under scrutiny… and they might not like it.

The best way to do your checks is to have each member of your team explain what they are doing to the other members of the team. Have them show their code, their database configuration, their UI… peer review will occur naturally. And better, knowledge will trickle down through your team!

Surround yourself with good people

It’s a corollary to the previous one, but also a summary of this whole article. If the people you are working with are dedicated to doing their best at coding, optimising their environment, not talking BS, testing and developing their knowledge, there is a great chance they will produce good quality software!

Now all you have to do is actually build the system your customer wants!

[1]Register to Jakob Nielsen’s Alertbox and read one or two of his books for a first approach. Most web usability principles can be adapted with good results to rich client UIs.
[2]See JUnit or TestNG for Java, NUnit for .Net, but these principles can be adapted (and indeed found online) for any language.
[3]As a starter, read this article for customer-driven development and testing


EssUp: Ivar Jacobson wants to get at your wallet…

Thursday, May 17th, 2007

I just listened today to episode #2 (MP3, 16 mins) of the excellent podcast from parlezuml.com’s Jason Gorman, and I couldn’t resist the temptation to criticise Jacobson’s marketing approach to introducing EssUp (the Essential Unified Process).

As much as I respect Jacobson (after all, he was a near-god to me a few years back), especially for giving us Use Cases and for his tremendous work on the Unified Process, I still believe there’s something fishy about EssUp.

Best of the best practices

The intent of EssUp is to merge three major streams of process engineering: CMMI, RUP and Agile. From what you can hear in the podcast, Jacobson suggests that this new process takes the best from all three: from CMMI, its focus on process improvement throughout the organisation; from RUP, the clear separation of roles and activities in the project and the UML-based artifacts (think Use Cases); and from “Agile”, its lean and mean attitude (well, really, whatever would make the customer satisfied).

According to Jacobson, the need for a new process emerged from the realisation that UP (Unified Process) and its best-known implementation RUP (Rational UP, from IBM) were too heavy to be applied fully and way too complex to be adapted… oh, and may I add that they weren’t too pleased that they didn’t contain the word “agile”?

Aggregation of practices

Every element of each of the aforementioned development processes is included or generalised in the EssUp process in what they call practices. They first draw up an action plan tailored to your organisation, then choose which practices are best suited to your project and make them work for you. I suspect that they would mostly respond with what you want to hear in terms of project organisation, but maybe that’s just me.

Clearly, what Jacobson and his team of highly competent consultants are selling is consultancy on every detail of a Software Development Process (SDP); no argument about the fact that they are highly competent, I mean, if they are all of the same calibre as Jacobson, this must be a hell of a party!

All in all, you know who they are and you know that they are knowledgeable about SDPs, and what you buy is their skill at setting up something that works for you. However, isn’t this process mash-up like saying “We have got absolutely nothing, but we’ll make you pay us for helping anyway”?

Groundbreaking?

I used to work with RUP on a project in the defence sector, back in 2001; at the time, I was much younger (indeed) and UP appeared to me as the best solution to SDP issues. It promised to take a shot at “the mythical man-month” by offering a set of roles, activities and artifacts that, if used cleverly, would guarantee a better outcome for your project. And to support that, they could show countless examples of how it worked for others… and I have to say, it actually worked quite well for us, although I would like to find a way to measure how much of that was due to us using RUP.

Now, when I look at EssUp’s work packages (and their subsequent details), which are presented on Jacobson’s website from a high-level point of view, I can’t help remembering the probably overpriced RUP application that we used to try and follow good practices on the project.

What I think is lacking in EssUp, as well as in any other process, is actual research and evidence to demonstrate that using one practice is going to dramatically improve the chances of a good outcome for your project. Moreover, there is no supporting analysis of how well practices perform when combined with each other; maybe two practices that independently improve a project’s viability would jeopardise it if combined? (And I am not even aiming at the XP people who tell you that you have to use all of XP for it to work, but can’t really tell what the impact of dropping one of its practices would be.)

Granted, there is no silver bullet (yet?) for software development processes, and having people like Jacobson foster new processes is certainly a very good thing; but you can bet that EssUp is no silver bullet either… so why present it as if it were one? And why not be honest about the fact that they actually want to sell good consultancy on highly engineered processes?


The agile enterprise at its best

Wednesday, April 4th, 2007

I am just coming back from Rome where I spent the past week walking through town as a tourist. No doubt the city is wonderful and its people are very kind, but that’s not the subject…

Should you walk the streets of Rome, you would be amazed by the number of illicit street vendors offering every possible brand of fake bags (even some where the logo didn’t match the original), fake watches (how many €10 Rolexes have I seen?), fake silver bracelets and other piles of rubbish that every young visitor is keen to bargain for!

Then something happened that made me think: when it rains, their business changes; they all suddenly sell umbrellas!

At the first drop of water from the skies, they all pack their bags and jewellery in a very efficient manner developed for running away from police (just grab the four ends of the sheet your stuff lies on, and stand up), and they rush in the streets with handfuls of umbrellas, quickly offering them to office workers, students and tourists altogether.

They may get only 5 minutes of rain, but they always sell loads of umbrellas! Although I don’t know at what price they’re selling them, I am sure they make a good profit each time.

And then when the rain stops, they’re back to the pavement. Now that I am writing this, it dawns on me that they probably switch to selling fresh bottles of water on the hottest days of the year…

If that’s not agile, what is?
