ICONIX: Life after Extreme Programming
Software projects fail for all sorts of reasons, although the same reasons crop up over and over again. The Standish Group’s CHAOS Report highlights “lack of user input” as the primary reason for project failure. However, “analysis paralysis” can also sink projects. The ICONIX process might just provide a way out.
When a team recognizes that a project is floundering, they will often adopt a software development process, believing, in good faith, that it will show them the right path to follow. Herein lies the problem: most “traditional” software processes tell you to throw everything plus the kitchen sink into the analysis and design stages, meaning that not a single line of code gets written for several months (if ever). It’s scary, but it does happen. If you find yourself embroiled in an endless debate about whether to use «include» or «extend» in your UML use case diagrams, you know that analysis paralysis has set in.
Fundamentally, developing software is about getting from a start point (A), a set of requirements, to the target (B), a finished piece of software that fulfils those requirements. The trouble is that the target is often a moving one because requirements change over time; so a process which sets the requirements in stone is doomed to fail. Conversely, a process which lets you change anything on the basis that “change is free” is an open invitation to scope and budget creep.
Agile development is very much a reaction to the “heavyweight” methodologies that tell you that you must write a 200-page requirements document before writing a single line of code. With such a monolithic requirements spec, it’s almost impossible to determine whether all the requirements have been met. Many, many projects have taken this route over the last 30 years, and a scary number of them have failed. Given this, it’s small wonder that agile development has become so popular, with its promise of:
- Positively steering teams away from analysis paralysis.
- Accepting unpredictability, and therefore making adaptability a key process requirement.
- Encouraging communication with the customer so that there’s less chance of misunderstood requirements.
There are many different agile development processes: Feature Driven Development (FDD), Extreme Programming (XP), the ICONIX Process (my favorite, but I’m biased), Crystal Clear, Crystal Orange, DSDM, to name just a few. By virtue of being an agile process, each one claims to achieve the same set of agile goals – although each one takes a different approach. Choosing which one is best for your project is almost like choosing a new religion: it’s a complex decision involving many leaps of faith. XP is currently the most popular of the agile processes – it is certainly the noisiest by far. However, in my opinion, XP can also be one of the most dangerous of the agile processes, because in many respects it attacks the problem of “heavyweight” development processes from the wrong angles.
Extreme Programming: the Good, the Bad and the Ugly
XP is a lightweight software development process that aims to reduce risk by placing working software in front of end-users as early as possible. It embraces a number of core rules with regard to coding, testing, planning and design. In this section, I will consider the following fundamental tenets of XP and investigate their pros and cons:
- Unit Testing
- Small Releases
- Lightweight requirements analysis based around user stories
- Pair Programming
There’s much more to consider than can fit into a single article, but this should still give you a good idea of the pros and cons of XP. If you’re interested in a more in-depth analysis, I’ll refer you to my book Extreme Programming Refactored. More information about XP, in general, can be found in Kent Beck’s book, Extreme Programming Explained.
Unit Testing
XP deserves credit for raising the profile of agile development and for popularizing the practice of writing unit tests – an essential aspect of software development, and one that was sorely neglected until a few years ago.
A unit test is a piece of code whose sole purpose is to test another piece of code: the test passes in a value, and asserts that the value that it gets back out is what was expected. In this sense, unit testing is “black-box” testing because it is primarily concerned with the inputs and outputs of a class or method, not with the method’s “internals”. In XP, unit testing is wrapped up into a practice called Test-Driven Development (TDD), which is also sufficiently self-contained to be used in non-XP projects. The premise behind TDD is that you use tests to design the software – you write the unit tests before writing the code, and you do this in a very fine-grained manner. You write a test, write the code to make the test pass; write the next test, and so on; all the while using feedback from the previous tests to drive the next one.
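To make the TDD cycle concrete, here is a minimal sketch of a single red/green step in Python, using the standard unittest module. The Discount class and its 10%-off rule are invented purely for illustration; in TDD the test comes first, so it is shown first.

```python
import unittest

# One red/green TDD step: the test below was (notionally) written first,
# and the Discount class underneath is the minimal code that makes it pass.
# Discount and its 10%-off rule are hypothetical, for illustration only.

class TestDiscount(unittest.TestCase):
    def test_ten_percent_off(self):
        # Black-box style: pass a value in, assert on the value that comes out.
        self.assertEqual(Discount().apply(100.0), 90.0)

class Discount:
    def apply(self, price):
        # The simplest thing that passes the current test: 10% off.
        return round(price * 0.9, 2)
```

The next iteration of the cycle would add another test (say, for a discount cap) and only then extend `apply` to satisfy it, using the feedback from the previous test to drive the next.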
XP purists are very fastidious about this feedback cycle. Once a test passes, a new test must be written and the code “refactored” to pass that new test. Even Microsoft took it on the chin when it attempted to adopt TDD and demonstrated that it had seriously missed this point: its Guidelines for Test-Driven Development article was quickly withdrawn after attracting heavy criticism across the MS-following blogosphere. The style of TDD it described involved writing batches of tests before writing any code, and was therefore judged “too removed from the code”, having “missed the critical importance of the feedback cycle”.
In non-XP projects (such as those using the ICONIX Process), the concept of “prefactoring” rather than refactoring is used: that is, spending a little more time getting the design right before you begin writing code – this allows you to write more tests, but without being too far removed from the feedback cycle (i.e. it’s more efficient).
Small Releases
XP aims to put a simple system into production quickly, and then to put out new versions on a very short release cycle (e.g. 1–2 weeks). The typical two-week iteration doesn’t necessarily result in a production release (i.e. a program that will be used in a live system by “real” users). It may instead be an “internal” release intended to get early feedback from the customer.
Small releases are a common feature in all agile processes, though they tend to differ according to each process’ core philosophy. For example, Scrum uses a one-month release cycle, which many developers find more sustainable over a long-term project.
Lightweight Requirements Analysis
In its drive to reduce the documentation burden on the developer, XP champions a less formal approach to requirements gathering, based around user stories. The goal is to produce a minimal, working version of the software as quickly as possible. Then, based on continual customer feedback (XP insists on having an on-site customer team equal to or larger than the team of programmers), the requirements evolve. Emphasis is placed on the features that are really fundamental to the customer. These are then honed and improved in an iterative, refactoring process. Or so the theory goes.
The upside of this approach is that XP has helped to dispel the popularly held belief that the way to prevent software projects from failing is to pile several layers of documentation on top of the programmers and watch them suffocate. However, not everything about the “old ways” of developing software was entirely broken. Analyzing the requirements in detail before designing the software was (and still is) an essential step. It prevents the development team from thinking they’re making lots of progress when in fact they’re just running very hard in the wrong direction.
The problem is that if requirements don’t get explored in enough detail at the start, then new details will be discovered after a significant amount of code has been written – meaning code needs to be rewritten. In XP this is dressed up as refactoring and “emergent design”, but to many of us it just means “Constant Refactoring After Programming” (roll your own acronym) and the project taking a tortuous, meandering route to conclusion.
The fact that projects tend to evolve and meander also makes it harder to truly assess whether XP makes a team more productive. XP’s requirements gathering approach also relies on a very programmer-centric view of the world. Even the customer is encouraged to script the requirements in executable test form.
Ultimately, XP’s encouragement of an approach to software development which doesn’t sufficiently explore the customer’s business requirements before the team starts coding is, in my view, its biggest failing. As a result, in their efforts to create working software in the first week or two of the project, teams tend not to explore the all-important “rainy day” scenarios. Many of the so-called “hidden requirements” that are often only discovered much later were really there waiting to be found all along. The phrase “sweeping problems under the carpet” springs to mind.
My co-author Doug Rosenberg had a major client who used XP for many years across multiple projects (before adopting ICONIX). After a spell of following XP’s twin principles of YAGNI (“You Aren’t Gonna Need It”) with their requirements, and DTSTTCPW (“Do The Simplest Thing That Could Possibly Work”) whilst coding, they delivered code written to “scotch tape and bubblegum” architectures, resulting in (in the client’s words) “hordes of angry, bloodthirsty dust bunnies emerging from under the carpet.”
In one case, a piece of prototype code that was rushed into production before it was ready (i.e. “rabid prototyping”, as opposed to the genuinely useful “rapid prototyping”) resulted in a catastrophic loss of data, which led to a billion-dollar business unit being completely shut down for several days.
While leaping into code and patching the design as you go along might seem like a great idea, because teams begin programming earlier, it can actually store up a lot of trouble.
Pair Programming
XP places a heavy emphasis on pair programming, whereby two people sit at the same PC. One “drives” (taps away at the keyboard) while the other “navigates” (watches the screen and engages the driver in conversation). Pairs switch partners often, so that they don’t become stale and to reduce the negative effects of ill-matched pairings.
Pair programming is seen by its proponents as a sort of continuous code review that drives up code quality. Of course, the downside is that you get two programmers doing the work of one; and the jury’s still out on whether pair programming is cost-effective.
XP’s highly social collection of values and practices aren’t suited to everyone. They are well suited to extrovert, noisy types who like to talk all day, then clock off at 5PM (XP extols a strong 9-to-5 culture). However, many programmers are introverted, and (oddly enough) prefer a quiet place to think when programming. These individuals also don’t respond well to XP’s high dependency on oral documentation (the practice of keeping project knowledge, including the requirements details, alive by making sure everyone keeps talking about them). An XP shop can be a noisy, highly animated place.
While pair programming advocates claim that they would never return to solo programming, many programmers I’ve spoken to shudder visibly at the thought of spending each day in such close proximity to their peers, never really able to shut themselves away into the intellectual, meditative, deep-thought “flow” state that is required for programming. Being involved in a noisy XP project when you really just want to spend some time thinking about a design can be really frustrating (I’m talking from experience here).
On Digg.com in January, someone posted this message in response to a news item about the rules of XP:
“Heck, I *picked* my wife and I don’t think I’d enjoy sitting shoulder-to-shoulder with her all day to write programs, how is it supposed to work when a pointy-haired-boss pairs you up with someone?”
Salvaging the Good, Abandoning the Bad
Although XP is fatally flawed, in my opinion, there are some things that it gets right and that we can take forward:
- Unit testing (especially the practice of writing tests before the code).
- Encouraging teams to communicate and collaborate more throughout the project (not just in a big “requirements and design workshop” held at the start of the project).
- Agile planning: in particular this means releasing working software in small increments.
As to the rest (pair programming, refactoring and so forth) you could probably “take it or leave it”, depending in large part on the nature of your project. But to lower the risk, you would also need to put in place the practices that I describe in the next section.
What it boils down to is that XP’s agile goals are sound: its heart is in the right place. If you focus on the same failure modes that XP originally set out to address, but learn the painful lessons of what can go wrong with XP, you can end up with an agile process that focuses on disambiguating the requirements early in the project, and on creating project documentation that is minimal yet sufficient.
ICONIX: Cookbook Agility
At around the same time that XP became popular, the book Use Case Driven Object Modeling with UML (by Doug Rosenberg and Kendall Scott) was released. This slim volume describes the ICONIX Process. It shows how to get from use cases (aka behavioural requirements) to source code, via a minimal, core subset of UML diagrams, and in the process tying your use cases very closely to the objects that you’ll be using in your design. It allows for proper analysis while avoiding the dreaded project-stopper, analysis paralysis.
The ICONIX Process in a Nutshell
Fundamentally, the ICONIX Process is about understanding and documenting the user’s behaviour requirements, rooting out ambiguity in these requirements, and then using them to drive a good clean OO design. As such, it’s also about crossing the great chasm between analysis and design, something which many development processes tend to gloss over.
The core activity that the ICONIX Process uses to get from analysis to design is a little-known (but highly effective) technique called robustness analysis. In many ways, this is the ICONIX Process’ “secret weapon”.
Robustness analysis is a collaborative activity which involves drawing pictures of use cases, in effect tying the use cases to your OO model. It’s this process that reveals ambiguous statements in the use case text and helps to discover gaps (often huge swathes of missing functionality) in the requirements, which otherwise wouldn’t have been discovered until the team was well under way coding.
The basic premise of the ICONIX Process is that, before you code, you should:
- Correctly and unambiguously understand the user’s behaviour.
- Discover all the classes necessary to support the user’s behaviour.
- Discover all the software functions necessary to support the user’s behaviour.
- Do a good, clean, responsibility-driven allocation of software functions to classes.
It is that final point – mapping software functions to classes – that is neglected in other methodologies and as a consequence is often “horsed up”. So with the ICONIX Process, we proceed step-by-step:
1. Disambiguate the behaviour requirements (i.e. the use cases)
2. Discover the entity classes that participate in the use case
3. Identify the screens and UI elements that participate in the use case
4. Identify all the logical bits of behaviour
5. Allocate operations to classes
6. Define input/output parameters
Steps 1–4 are accomplished with use case text and robustness diagrams; step 5 is accomplished using sequence diagrams and class diagrams. Step 6 can also be done on the sequence diagrams (as a second pass, perhaps), or it can be done in code, with a test-driven mindset.
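The kind of skeleton that falls out of these steps can be sketched in code. The following Python sketch assumes a hypothetical “Place Order” use case; the Order and OrderScreen classes, and all of the method names, are invented for illustration rather than taken from any real ICONIX model.

```python
# Hypothetical skeleton for an imagined "Place Order" use case.
# A real project would take these names from its own robustness
# and sequence diagrams.

class Order:
    """Entity class discovered during robustness analysis (step 2)."""

    def __init__(self):
        self.lines = []

    def add_line(self, sku, quantity):
        # Operation allocated to the entity that owns the data (step 5).
        self.lines.append((sku, quantity))

    def total_items(self):
        return sum(qty for _, qty in self.lines)


class OrderScreen:
    """Boundary class for the screen identified in step 3."""

    def __init__(self, order):
        self.order = order

    def submit(self, sku, quantity):
        # Input/output parameters defined in step 6; the actual
        # behaviour is delegated to the entity class.
        self.order.add_line(sku, quantity)
        return self.order.total_items()
```

The point of the responsibility-driven allocation is visible even in this toy example: the boundary class stays thin, and the behaviour lives on the entity that owns the data.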
In Agile Development with ICONIX Process, we clearly tie the ICONIX modelling approach into unit testing – a combination that we call Design Driven Testing. The process involves systematically driving your unit tests from your analysis-level robustness diagrams, to make sure that all the “rainy day scenarios” are accounted for in your code.
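As a rough illustration of the idea, here is what a pair of scenario-driven tests might look like in Python, assuming a hypothetical “Withdraw Cash” use case with one basic course and one “rainy day” alternate course. The Account class and the InsufficientFunds exception are invented for this sketch.

```python
import unittest

# Design Driven Testing, sketched: one test per scenario on the robustness
# diagram, including the "rainy day" alternate course. All names here are
# hypothetical.

class InsufficientFunds(Exception):
    pass


class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        # Alternate course from the diagram: reject an overdraft.
        if amount > self.balance:
            raise InsufficientFunds()
        self.balance -= amount
        return self.balance


class TestWithdrawUseCase(unittest.TestCase):
    def test_basic_course(self):
        self.assertEqual(Account(100).withdraw(30), 70)

    def test_rainy_day_overdraft(self):
        # Driven directly from the alternate course on the diagram,
        # so it cannot be quietly forgotten.
        with self.assertRaises(InsufficientFunds):
            Account(100).withdraw(150)
```

Because each test maps back to a scenario on the diagram, a missing “rainy day” test shows up as a visible gap rather than as a production surprise.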
ICONIX in an Agile World
As I noted at the start of this article, there are many agile processes to choose from. Many of them place a greater emphasis on analysis and design modelling than XP does, and don’t share XP’s apparent allergy to written documentation. Programmers who dislike the social aspects of XP should also have a better time with any of these other processes.
However, I believe that ICONIX is the process that most effectively ties together the analysis and design phases. The highly cohesive ICONIX design process, described above, discovers the “hidden” requirements early on in the project, before reams of dependent code have been written. The result is that the need for refactoring is vastly reduced, but agility is retained.
In other respects, the ICONIX Process is quite comparable to XP. Both processes see communication between team members (and the customer, for that matter) as essential. Analysis and design in the ICONIX Process are highly collaborative activities, with teams quickly scribbling robustness diagrams on a whiteboard (or, more effectively, using modelling software and an overhead projector).
Once you have that design, then the choice of whether or not to adopt other XP practices, such as pair programming, is a decision you can make with greatly reduced risk. The critical part is that programmers can, if they prefer, “solo-program” from these collaborative designs.
Development processes can be thought of as being on a scale from “hacking” at one end, to heavyweight, “high ceremony” at the other. Many believe that XP has pushed the slider a little too far towards hacking. For most developers, the ideal agile process would be nearer the middle of the scale, perhaps slightly left-of-middle – definitely not high ceremony, but also not hacking.
I believe that ICONIX is just such a process. It isn’t a silver bullet by any means, but I firmly believe that applying the ICONIX Process to your project will significantly improve the project’s chances of success. The process’ creator, Doug Rosenberg, has seen it used on hundreds of projects, and seen first-hand that it’s possible to avoid analysis paralysis whilst still doing sufficient up-front work to create a design which is closely tied in with the requirements. Ultimately, placing an emphasis on writing clean, unambiguous behaviour requirements and then using them to drive your design might just save your project.