
The case for staged delivery and Agile methodologies

During the past week I attended a .NET master class with Juval Lowy. One of the last items Juval talked about was his notion of software project management using staged delivery. Being an Agile advocate myself, I outlined several of his points and wrote down several differences and similarities during the talk. I've written my thoughts on these issues in this article as a way to extract the essence of my thinking on this amazingly complex issue, so that I can better understand it and share it with you, the reader. Please feel free to comment and share your thoughts as a comment or trackback.


I've outlined below the basic values that Juval talked about regarding his staged delivery approach. Interestingly enough, there is a large correlation between the values of various Agile methodologies, such as XP and Scrum, and his world view. However, there are several differences. I've decided to split the various values he talked about into 3 categories:

  1. General values
  2. Similar values to Agile
  3. Values that are different from Agile


I'm not going to pretend that I can give you all the details of the life cycle in a short article like this, but I'll try to give you as many of the highlights as needed to make sense of the points I'm going to make.



General values:

  1. Staged delivery is all about iterative development and short release cycles.
  2. There are several stages:
    1. First, there is a big up-front design process done by the architect. That design process can take up to 30% of the life cycle. The outcome of that design is the various components, entities and interactions of the system.
    2. Every component has a life cycle:
       i. An SRS document, which is reviewed
       ii. A test plan is generated
       iii. Some construction of the component to flush out various difficulties that will come up later (this is throw-away code)
       iv. A detailed design, which is reviewed
       v. Development + test development
       vi. Code review
       vii. Integration testing
    3. This whole life cycle implies that even the simplest component can take, on average, a minimum of 3 weeks.
  3. Team involvement:
    1. Estimation:
       i. The team is the best source for estimation purposes. Statistically, Juval has found that the team produces the most accurate estimation results using the following process:
       ii. For each component/feature, the whole team tells, one by one, how long they think it will take to build
       iii. Put the results in an Excel spreadsheet and calculate the average and the standard deviation
       iv. For each team member whose estimate deviated by more than the standard deviation, ask them why (they might have an insight no one else thought about)
       v. After this, go around the room again and ask for estimates. After a couple of such rounds, the results will even out and you've got yourself as accurate an estimate as possible
       vi. There is a lot of peer review in the process

  4. Requirements:
    1. Start with the customer/marketing/domain expert and derive a list of basic requirements
    2. Generate use cases
    3. Generate the domain model & interaction diagrams
    4. Generate architecture diagrams
    5. Generate the SDD documents
    6. Code
    7. Each step can back up to the previous step if things are not what they appear to be
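The estimation rounds described above resemble a wideband-Delphi-style process, and can be sketched roughly like this. The names and numbers are invented for illustration only:

```python
from statistics import mean, stdev

def estimation_round(estimates):
    """One round: average the team's estimates (in days) and flag
    anyone whose estimate falls more than one standard deviation
    from the mean -- they may have an insight the others missed."""
    avg = mean(estimates.values())
    sd = stdev(estimates.values())
    outliers = [name for name, est in estimates.items()
                if abs(est - avg) > sd]
    return avg, sd, outliers

# First round of estimates for a single component, in days
round_one = {"Alice": 5, "Bob": 6, "Carol": 21, "Dave": 7}
avg, sd, ask_why = estimation_round(round_one)
print(round(avg, 1), ask_why)  # Carol's 21 stands out -- ask her why
```

After discussing the outliers, you would run another round with the same function and watch the standard deviation shrink.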



OK. Those were the highlights, and certainly not a comprehensive list.  However, I think the next two categories will highlight many of the finer points in the methodology.


Similarity to the various Agile methodologies

  1. You can always change your mind

The whole process is built out of very frequent iterations and releases. At any point we can discover that the requirements have changed. If they do, we can tell the customer that there's no problem. They should accept, however, that the schedule might slip or that other features might not make it into the overall release. It's their choice.

In Agile methodologies, the customer can always change their mind as well. However, this can only take place *between* iterations (which can vary from 2 weeks to 3 months to 6 months, depending on the methodology you choose to implement).

  2. Daily build

Every day you build the whole source code and automatically run a smoke test. This provides daily integration and the knowledge that you always have a release on your hands. Agile development (XP in particular is very specific about demanding this) also has this notion of "continuous integration", though the daily build cycle is slightly different in that it provides more steps and integration points. I'll outline these differences in the next category.
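A daily smoke test can be as simple as a harness that runs a handful of critical checks after the build and fails loudly. This is only an illustrative sketch (the checks themselves are made up), not part of the methodology's actual tooling:

```python
def smoke_test(checks):
    """Run each named check; return (passed, failed) lists.
    A smoke test only proves the build isn't obviously broken --
    it is not a substitute for the full test suite."""
    passed, failed = [], []
    for name, check in checks:
        try:
            check()
            passed.append(name)
        except Exception:
            failed.append(name)
    return passed, failed

# Hypothetical checks a nightly build might run after compiling
checks = [
    ("imports core module", lambda: __import__("json")),
    ("config parses",       lambda: __import__("json").loads('{"db": "test"}')),
]
passed, failed = smoke_test(checks)
print("BUILD OK" if not failed else "BUILD BROKEN", failed)
```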

  3. Spike solutions

During the component life cycle mentioned by Juval, there is a stage where a "throw-away" code project allows the developer to make sure they know what they are getting into. Agile development (XP and Scrum in particular) provides the idea of "spiking" in order to uncover unknown project risks such as lack of technology knowledge or unforeseen integration problems.

  4. Design review

Each component has a design review before implementation. In Agile we also have a continuous design review process, where each design change can and should be discussed with other members of the team.

  5. Testing
    1. Tests are written in conjunction with the production code

In Juval's words: "No line of code goes untested". That means every little piece of functionality that your class, object or component provides should be tested. Testing is done using test clients (drivers) that allow the developer to trigger the component's various actions with various values. Agile development (XP in particular) talks about writing unit tests that do exactly this, only they have no GUI and are fully automated. More on this in the next section.

    2. The test plan is written alongside the component design documents
    3. Every component has an emulator and a simulator
       i. Makes the developer provide interfaces early for each component
       ii. You can test against emulators to get hard-coded results and exceptions
       iii. You can automate smoke tests
       iv. Every component has 3 versions: real, emulator and simulator

In Agile development you use "Mock Objects" inside your unit tests to provide this same functionality.
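The emulator idea maps closely to mock objects: the component under test is wired to a stand-in that returns hard-coded results or raises exceptions on demand. A minimal sketch, where `BillingService` and its tax dependency are invented purely for illustration:

```python
from unittest.mock import Mock

class BillingService:
    """Depends on a tax component only through its interface."""
    def __init__(self, tax_component):
        self.tax = tax_component

    def total(self, amount):
        return amount + self.tax.tax_for(amount)

# "Emulator": a mock with a hard-coded result, no real tax logic
tax_emulator = Mock()
tax_emulator.tax_for.return_value = 17.0
service = BillingService(tax_emulator)
assert service.total(100.0) == 117.0

# The same emulator can simulate failures to test error paths
tax_emulator.tax_for.side_effect = ConnectionError("tax server down")
try:
    service.total(100.0)
except ConnectionError:
    print("error path exercised")
```

Note how the mock plays both the "hard-coded results" and "exceptions" roles that the emulator version of a component provides.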

  6. Team based estimation

As described earlier, the team is in charge of giving out estimates on development time. This is just like Agile development, where no one but the development team can decide how much time something is going to take to build. The people on your team are individuals with different strengths and weaknesses. They know (and learn throughout the project) how much time it takes them to build things.

  7. Feature ownership

Because people on the team say how long it will take them to build things, they also feel more accountable for the products of their work. They now have more of a vested interest in the success of the product. In Agile, all team members must also be committed to the final product delivery, and beyond estimating they also pick and sign up for the tasks they would like to perform in each iteration. Essentially, the team drives itself to the finish line, guided by the things the customer wants most for the specific iteration.

  8. Burn down chart

Just like in Scrum, there's a burn-down chart that tells us what we expect to happen on the time vs. % done scale.
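A burn-down chart is just remaining work plotted against time; the projected finish falls out of the average burn rate. A rough sketch with invented numbers:

```python
# Remaining work (in ideal days) recorded at the end of each week
remaining = [100, 90, 78, 70, 61]

# Average work burned per week so far
burned_per_week = (remaining[0] - remaining[-1]) / (len(remaining) - 1)

# Naive projection: weeks still needed at the current rate
weeks_left = remaining[-1] / burned_per_week
print(f"burning {burned_per_week:.2f}/week, ~{weeks_left:.1f} weeks left")
```

Real burn-down tracking is about watching the slope change, but even this naive projection makes slippage visible week by week.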

  9. Code
    1. Should be self-explanatory

That is, if you find yourself writing a comment, most likely you're not writing the code in a self-explanatory way. There should be almost no comments in the code except when absolutely necessary. It should read like a book.

In Agile, the production code and the unit tests are the best and most recommended form of documentation.

    2. This is not to say that there is no outside documentation of the API, but that topic resides in the "differences" category, later on.
  10. Client involvement is encouraged.

The more time the client spends with you, your team and the product feature decisions, the better. In Agile, and XP in particular, an on-site customer is a must for resolving much of the communication between the team and the client around design decisions and the priorities that have been decided. There is a difference, however, in that here the customer does not have to be there.

  11. One click to build, setup, deploy and test

These processes should be fully automated, so that integration can be as smooth as possible. This goes along with Agile's notion of continuous integration.

  12. Defensive programming

Assert all the assumptions in your code as much as possible. On average, every 5th line is an Assert. That's the only way to flush out those bugs early and often. In Agile (XP, Test-Driven Development) the Debug.Asserts are replaced/complemented with unit tests that perform their own asserts.
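Defensive programming in this style means asserting every assumption at the point it is made, while the unit-test style checks the same assumptions from the outside. A toy illustration of the two side by side (the function is invented for the example):

```python
def average_velocity(distances, hours):
    # Defensive style: assert the assumptions inline
    assert len(distances) == len(hours), "parallel lists required"
    assert all(h > 0 for h in hours), "hours must be positive"
    return sum(distances) / sum(hours)

# Unit-test style: the same assumptions verified from the outside
def test_average_velocity():
    assert average_velocity([100, 50], [2, 1]) == 50.0
    try:
        average_velocity([100], [0])
    except AssertionError:
        return True  # bad input was caught by the inline guard
    return False

print(test_average_velocity())  # prints True: the guard fires on zero hours
```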

  13. Overtime is evil

As a practice, you should never do overtime. Sure, sometimes it's called for, but if you're doing it all the time then it's not overtime, it's bad management. Again, Agile supports this notion as well.

  14. Buddy programming

Each developer has an assumed "buddy" developer they can go to whenever they have a problem or want to get things done that are a little too complicated to do alone. This "buddy system" is much like pair programming in XP, except it is less strict. When two people work on the same code you also avoid the "if he leaves we're doomed" symptom. (A funny remark from one of the folks in the class was "Unfortunately, buddies usually leave together"..)

  15. Have a coding standard

Coding standards ease maintenance and productivity for developers across the project. Agile supports this notion as well.

  16. Demo and show the status of the product early and often

Iterative development with small releases means that every short period of time we can show the customer a new version or release of the product with more functionality. This lets them see what they really want out of the application and gives them a clearer understanding of their own requirements. Agile development also believes in showing progress early to the customer and all stakeholders.



So far it certainly seems as though the process Juval was talking about is Agile in many ways. But does it really subscribe to the four values that all Agile methodologies stand on?


  - Individuals and interactions over processes and tools
  - Working software over comprehensive documentation
  - Customer collaboration over contract negotiation
  - Responding to change over following a plan



Here are the differences (in highlights) that I’ve written down for myself during the talk:




  1. Big design up front

(Note: While the following paragraph details what I got from the master class, Brian Noyes, who works with and teaches this class at IDesign, has some clarifications and comments in this post.)


A big design phase, done by an architect, that can take around 30% of the overall project time is done up front. This is a great departure from the "as simple as possible" approach taken by XP. Agile (XP in particular) believes in an iterative, evolutionary design approach, where the design of the system is done incrementally as the system is being developed. With up-front design you spend a lot of time designing parts of the system without really being able to tell whether they will integrate well in the end. Usually, the passage from design into implementation can be a dramatic stage where a lot of inconsistencies in the design model are uncovered. This isn't the architect's fault, though. Today's software architects try to mimic the work of engineering architects (in electronics, buildings) in that they believe that a design that is good enough can produce predictable work and schedule every time. The difference between engineering and software architecture is that the number of tools a software architect has at their disposal to make sure they are doing the right thing is minuscule in comparison to those in the hands of the engineer. The engineer has mathematical analysis tools, pressure-point simulations and pre-known information from building the exact same thing for hundreds of years. Software architects have peers to check their work and make sure things "seem OK". As a consequence the design will almost always be less adequate than initially thought, and changes will have to take place when implementation time arrives.

  2. External API documentation (much like MSDN, NDoc..)

According to this approach you need external API documentation in order for new developers on your project to be easily integrated with your current team. You don't want to re-explain every time how this class talks to that class, the assumptions, calling conventions and so on. In Agile, however (XP in particular), the unit tests provide the perfect documentation of our APIs, assumptions and calling conventions, and there is less need for complicated documentation (and maintenance thereof) such as this.

  3. QA runs the automated test clients

The test clients that the developer creates for their components are run automatically by the QA team. In Agile (XP), all the unit tests are run by the developers before the code even reaches QA. This allows bugs to be discovered earlier in the loop.

  4. Design for performance

In the suggested methodology, you should always think about performance and design for higher performance than what is requested. That is, if the required performance (or scalability) is X, design for 10X (for example, design up front to support 10 times the number of clients required in 3 months). Again, a big departure from Agile, where you only design for the simple cases and add scalability/performance when it is needed, and no sooner. Why? Because we want to deliver the best value for the customer, as soon as possible. In fact, many of the things we deem "needed" that were not in the requirements are actually never going to be used, and we're wasting time implementing them when we could have been implementing a required set of features/functionality. Also, we're making the code more complicated by adding this design up front, and the code is then harder to maintain. Also, consider what happens when you spend an extra week building functionality that the client didn't ask for because you thought "they'll need it in the next version anyway". A week later the customer's business needs have changed, and the highest priority now becomes some feature which could have already been developed by now if you hadn't been stuck on the performance improvement no one asked for. Of course, this is a simple situation. Think about the times when you spent months building a sophisticated, scalable framework for your application which ended up supporting a twentieth of the users you designed it for (and a twentieth of that was the actual required task). What would your manager want more? A 2-month schedule decrease or a 2000% performance improvement over the required features? It's all about delivering value to the client as soon as they need it.

  5. The client does not have to get involved if they don't want to

Not so in Agile. In Agile the client is key for communication, feedback and prioritizing the features for the next iterations.

  6. Documentation contains context maps

Context maps tell the developers and stakeholders how the system works in the context of identity boundaries, threading, tasks, architecture and other things. No one in Agile land talks about this, and I think it definitely should be on any project, Agile or not.

  7. Lightweight XML comments on your code

Simply for tooltip reasons for your developers, this is a good idea. While Agile, and XP in particular, supports the idea of as little documentation as possible, I think this is still a good idea.

  8. Design is done solely by the architect

In Agile the team has much more responsibility, even in changing the existing design to fit the reality that has been encountered. Of course, this isn't done alone, but the team has much more power, because in Agile development it's all about the team, not the process.

  9. Every class has its own spec

Even more documentation. In Agile, and XP specifically, the design of a class is done just-in-time by the interested parties using CRC cards.

  10. Aim to minimize the rework each component may need

Assuming the design is that good and that requirements never change, the component's design should not change and work on it will be done just once. Again, a big departure from Agile's "simplicity first, refactor later" approach, where the component's iterative design can change the interfaces and implementation if needed. One can argue, however, that a component in this sense is a black box with public interfaces, where only the inner implementation can change (think web services). In that case there is a middle ground: you try to minimize as much as possible the changes to the public interfaces of a system, for integration purposes.




So what do we have here? What can we deduce from this?

Obviously, the difference in approach between Agile methodologies and staged delivery as Juval portrays it is there, but it's not as deep as you might originally assume. In fact, Juval's way of doing things takes the best of both worlds (in his view), Agile and engineering, and makes them work together using disciplined developer behavior every step of the way. I'm sure it works great. However, just as someone who believes in the Agile side of things should take a long hard look at this approach, so should the "other" side take a look at the way things are done in the Agile camp. Both sides have their wins, just as they have their losses. Both can learn a great deal from each other. One of the things Agile is insistent upon is the ability to change the process until it fits your own development team/organization. This includes "borrowing" from other methodologies to make yourself more successful. If that proves right, aren't we all just a little happier?


One of the things Juval talked about in relation to XP is that in order to advocate XP you need to be a zealot, a religious person. I'd say that sounds a little like FUD. I'm in the Agile camp, but there's nothing I like more than listening to people telling me why they think they have a better way. Do I care that I was wrong, as long as the end result is better? Not really. The only thing that I've learned so far is that I'm capable of making mistakes, but I'm a great learner from them. To be a zealot is to believe in something in spite of all the hard evidence, and that's true only for the more extremist side of any bunch. As software developers we are more prone to logical conclusions. That is, if you explain it to us in a way that makes sense, there's really no way we can say no. The matter of software development is so complex because it's all about people. And no one gets people, really. That's why we all have different ways of looking at managing people processes. That's why we should all keep an eye open to make sure we're looking at reality and not just what we want our reality to be. Being open to new ideas and ways of doing things is part of the Agile way, and as such, looking at things that have proved successful should be just as important as looking at things that fail from within.


I’m purposely not trying to “judge” which is “better” or “more appropriate”, but merely trying to see the differences, so that it will be easier to pinpoint the exact locations where the methodologies make themselves stronger.  These “diff” points are what we all should be looking at, and ask each other “Why like this and not like that?”. Who knows, we might find a pleasant surprise.
