
MSF is a fraud? A response

Joel writes some posts in Hebrew and asks his readers to help him translate them.

In the translated post (which is a good read), Joel describes what he feels the process should look like when he reports a bug in the software:

For example, if I assign a bug to a developer I expect them to:

  1. reproduce the bug
  2. if it's not immediately reproducible, make a good faith effort to figure out why it's happening to me instead of just assuming that I'm doped up on anti-allergy medication and hallucinating it
  3. find the root cause
  4. do some searches to see if the same errors were made elsewhere in the code
  5. fix them all
  6. test the fix
  7. think about whether this bug might be causing serious implications for a customer who needs to be told about the fix
  8. etc.

Here’s how this would look on a project that has unit tests for the code (hopefully written test-first, so we have good test coverage), where the developer who gets the bug uses Test Driven Development (a small sketch of the reproducing test follows the list):

 

  1. The developer and their pair (pair programming, remember that?) sit at the machine
  2. They run all the unit tests and make sure they all pass
  3. If some don’t pass, they try and see if this has anything to do with the reported bug. In any case, all tests should run and pass before the bug can be fixed.
  4. Create a test that reproduces the bug (that is, fails when the bug happens)
  5. If it's not immediately reproducible, the pair contacts Joel and the three sit together to figure out the best test that would force this bug to reveal itself consistently
  6. The bug appears in the form of a failing test
  7. Make the test pass as simply as possible in the production code
  8. Run all the unit tests and make sure they all pass
  9. If some fail, try to make them all pass as simply as possible, or find some other way to fix the original failing test
  10. When all tests pass, think of other places in the code that might have the same problem
  11. If there are any, flush those places out too with failing tests
  12. Make them all pass as simply as possible, just like you did with the first test that failed
  13. When all tests pass, refactor your code to remove duplication: make all the changes you made point to a single place where the fixed code resides, so there is no longer a need to maintain all those places and the fix is written only once
  14. Make sure all tests still pass
  15. Get the latest version of the code from source control and run the tests with the latest version of everyone’s code.
  16. Make sure it all passes.
  17. Check in and see that the build of the system isn't broken by your new code (your build server should automatically build the code and run all the unit tests on every check-in)
  18. All unit tests run against the checked-in code from source control should pass
  19. If they don't, go back and fix the code until they do. Repeat until integration is successful
  20. Report the bug as resolved
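To make steps 4 through 7 concrete, here is a minimal sketch in Python; the `parse_ids` function and the bug report it mentions are made up for illustration, standing in for whatever production code the real bug lives in:

```python
import unittest

# Hypothetical production code standing in for wherever the reported bug lives.
# The (made-up) bug report: the last ID in a comma-delimited list is dropped
# when the input has no trailing delimiter.
def parse_ids(raw):
    # Fixed version: keep only non-empty elements instead of blindly slicing
    # (the buggy code assumed a trailing empty element and did `parts[:-1]`).
    parts = raw.split(",")
    return [p.strip() for p in parts if p.strip()]

class BugReproductionTest(unittest.TestCase):
    def test_last_id_is_not_dropped(self):
        # Step 4: this test fails against the buggy code, so the bug appears
        # as a failing test, and it passes once the simplest fix is in place.
        self.assertEqual(parse_ids("17,42,99"), ["17", "42", "99"])

if __name__ == "__main__":
    unittest.main()
```

Once this test and the rest of the suite pass, the refactoring, integration, and check-in steps above take over.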

Sounds like a lot of work? You might find that this work (integration and all the other stuff) gets done anyway; working this way just flushes out integration problems very early instead of waiting for QA to tell us something was wrong with the latest build. So you're basically choosing whether to do the integration very soon after making the code work, or much later, when a lot of changes have taken place and it will be much harder to find the code that broke the build. Your choice.

 

More stuff about the post that triggered this writing:

 

Lastly there's MSF. The author's complaint about methodologies is that they essentially transform people into compliance monkeys. "our system isn't working" -- "but we signed all the phase exits!". Intuitively, there is SOME truth in that. Any methodology that aims to promote consistency essentially has to cater to a lowest common denominator. The concept of a "repeatable process" implies that while all people are not the same, they can all produce the same way, and should all be monitored similarly. For instance, in software development, we like to have people unit-test their code. However, a good, experienced developer is about 100 times less likely to write bugs that will be uncovered during unit tests than a beginner. It is therefore practically useless for the former to write these... but most methodologies would enforce that he has to, or else you don't pass some phase. At that point, he's spending say 30% of his time on something essentially useless, which demotivates him. Since he isn't motivated to develop aggressively, he'll start giving large estimates, then not doing much, and perform his 9-5 duties to the letter. Project in crisis? Well, I did my unit tests. The rough translation of his sentence is: "methodologies encourage rock stars to become compliance monkeys, and I need everyone on my team to be a rock star".

 

Couldn’t this be said about *any* methodology out there, be it Agile, XP, MSF, or Waterfall? Sure. I don’t believe MSF is the problem here; in fact, I believe it’s a management problem. If you stuff a methodology down people’s throats, they won’t accept it even if it were “Do nothing but sit around all day and fill out 3 reports”. It’s a human thing.

Any methodology can work if the people are willing to use it, want to use it, and embrace it. What you’re saying here is that most people just aren’t good enough for these methodologies, that only smart people (who could come up with these things themselves) can use them successfully. Not true. With proper guidance and good will, a methodology can be successfully implemented in an organization.

This post feels more like FUD than anything else to me. Picking on MSF is wrong because MSF (especially MSF Agile) embraces change and is all about making the methodology work for your practices and not the other way around. You can’t dismiss something just because you don’t know enough about it. Agile methodologies exist precisely because *change* and *the human factor* are the most important things in the process of software development.

The only thing left in this equation to make these methodologies work for the organization (and for Joel in particular) is to actually hire the right people, the ones who are indeed “rosh gadol” (roughly, Hebrew for people who take ownership beyond the letter of their job). These are the people who will gladly embrace a methodology once they believe in it, and thus bring even more value than they do now.

 

And here’s another sentence that smells of FUD:

 

“For instance, in software development, we like to have people unit-test their code. However, a good, experienced developer is about 100 times less likely to write bugs that will be uncovered during unit tests than a beginner. It is therefore practically useless for the former to write these... but most methodologies would enforce that he has to, or else you don't pass some phase.”

 

I believe one of the marks of maturity in an experienced developer is that they know what they don’t know, and they know what happens when they make assumptions about the code. But it’s hard to explain, even to experienced developers, that even when you’re sure your assumptions are correct, you could be very, very wrong. Whenever I teach hands-on Test Driven Development, I make sure each of the attendees actually goes in and writes tests even for the most trivial of code. Even with methods that parse the shortest of strings, we find bugs. Methods that have 2-3 lines of code turn out to have simple off-by-one bugs (“I forgot to add a delimiter”, “the array is accessed one index further than it should be”, and so on). You all know these things.
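A minimal sketch of the kind of thing I mean (Python; the method name and the scenario are invented, but the guard in the loop is exactly the spot where these trivial methods keep going wrong):

```python
# A "too trivial to test" method from a hands-on session: join values
# with a delimiter by hand.
def join_with_delimiter(values, delimiter):
    result = ""
    for i, value in enumerate(values):
        result += value
        if i < len(values) - 1:   # the classic off-by-one spot: forget this
            result += delimiter   # guard and every result ends with a delimiter
    return result

# The tests that flush the off-by-one out, trivial as the method looks.
def test_no_trailing_delimiter():
    assert join_with_delimiter(["a", "b", "c"], ",") == "a,b,c"

def test_single_value_needs_no_delimiter():
    assert join_with_delimiter(["a"], ",") == "a"

if __name__ == "__main__":
    test_no_trailing_delimiter()
    test_single_value_needs_no_delimiter()
    print("both tests pass")
```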

A developer who says “I don’t need to test this; it works because I wrote it” has surprises coming. Not only that: when they do not write the unit tests, they lose some of the biggest benefits that unit tests give us (besides checking for bugs):

- Save money. They provide a safety net so that we can go in and change (refactor) our codebase at any given time, knowing that if something breaks, we’ll know about it right away

- Save money. We actually write the tests instead of dismissing them for lack of time or lack of value. This means better code coverage and more bugs found earlier. Plus it leads to developers who don’t say “It worked on my machine” but instead say “It passes all the tests on the build server; it works”.

- Save money. Finding those off-by-one bugs early in the cycle saves a lot of debugging time, and that’s money in the bank. Do the math: if I don’t find that bug today (which would take a 5-minute investment to write a test for), I’ll need to find it later, sometimes much later, and you all know that the cost of fixing a bug increases by orders of magnitude as we approach the later stages of testing and deployment, right? So let’s say that at a later stage it could take me anywhere from 15 minutes to 12 hours to locate that bug (because I might need to debug top-down from the GUI into the logic classes, and so on). Multiply that by the number of these small bugs that you find all the time, and now factor in how much a developer hour costs (a rough calculation is sketched below). Got it?
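To put rough numbers on that, here is a back-of-the-envelope sketch; every figure in it (bugs per month, debugging time, hourly cost) is an assumption you should replace with your own:

```python
# Back-of-the-envelope comparison; all numbers are assumptions.
bugs_per_month = 20
minutes_to_write_test = 5      # catch the bug now with a test
hours_to_debug_later = 2       # somewhere in the "15 minutes to 12 hours" range
cost_per_dev_hour = 75         # dollars

cost_now = bugs_per_month * (minutes_to_write_test / 60) * cost_per_dev_hour
cost_later = bugs_per_month * hours_to_debug_later * cost_per_dev_hour

print(f"Catch them with tests now: ${cost_now:,.0f} per month")
print(f"Debug them later:          ${cost_later:,.0f} per month")
print(f"Rough monthly saving:      ${cost_later - cost_now:,.0f}")
```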

 

