CLR Performance Q&A, and how it relates to unit testing and mock objects

The info is mainly from a CLR performance chat that was held recently, and you should really read some of the stuff in there.
I was actually asked to give a talk about performance the other day and declined, on the grounds that it's really not an area I'm an expert in (something I plan to change).
 
Getting back to the matter at hand, one of the questions posted there went like this:
 
Rico Mariani (Expert):
Q: When refactoring code - are there performance penalties for over-'functionizing' an application?
 
Surely an important question. In fact, one of the things I do most often is refactor my code to make its functionality clearer and more explicit. It removes code duplication, and personally I'm all about the DRY principle (Don't Repeat Yourself).
 
The answers were:
A: There sure can be. One of the guys that works on profiling tools here says that he sees -- in object oriented languages -- this tendency to write what he calls "work averse functions." What he means by this is that everything is so factored that there are many many functions, each of which does very little and then passes it on to the next function. In straight C programmers don't tend to code that way. The work-averse functions translate directly into much deeper callstacks and much more function call and return overhead. Partly because they're in the worst of all worlds... too big to be inlined, yet not big enough to be doing meaty amounts of work. So be careful. Factoring is good, but don't go crazy -- you get oophalism that way.
A: Yes. Definitely. Too many short methods that do little work, called deep in the callstack, can significantly affect performance. You should be careful about over-factoring methods that get called often (in deep callstack loops).
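 
To make the "work-averse" pattern concrete, here's a small C++ sketch of my own (hypothetical pricing helpers, not from the chat). To be fair, an optimizer would probably inline functions this tiny; the quote is about functions just past the inliner's size threshold, but the shape of the problem is the same:

#include <vector>

// "Work-averse" style: each function does almost nothing and
// forwards to the next one, deepening the call stack.
int addTax(int price)      { return price + price / 10; }
int addShipping(int price) { return addTax(price) + 5; }
int addHandling(int price) { return addShipping(price) + 2; }

int totalFactored(const std::vector<int>& prices)
{
    int total = 0;
    for (int p : prices)
        total += addHandling(p);    // three extra calls per item
    return total;
}

// The same logic as one "meaty" function: no call-and-return
// overhead in the hot loop, and trivial for the compiler to optimize.
int totalMeaty(const std::vector<int>& prices)
{
    int total = 0;
    for (int p : prices)
        total += (p + p / 10) + 5 + 2;
    return total;
}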
 
When you think about it, it actually makes a lot of sense. Still, I'd hate to think that refactoring might now be considered "optional". Most of the applications I'm dealing with are not real-time applications. Sure, they have their performance requirements ("The customer should not wait two minutes for the data screen to load", etc.), but usually I find performance and optimization to be last on my list (unless it's a core requirement).
I do believe that better readability and easier maintenance of your code are among the most important goals we can set for ourselves as developers. The creation phase of an application is just step one in what is usually (and in most projects, painfully) a long lifetime of maintenance and feature additions. It's that part we need to set our sights on, no less than step one. If we don't, we get to the "Hell Stage". You know the Hell Stage, right? It's the stage where it's easier to rewrite the whole application or feature than it would be to maintain it.
 
So when I read stuff like this, I need to remind myself that refactoring is good, unless it's not. That is, refactoring should be the rule, and inlining your code should be the exception. If your application is at a stage where inlining methods is the one thing that will save it from performance hell, and it's not a real-time application, you might need to rethink your design, or think about a different way of optimizing it.
I mean, if you're talking to the database or to your hard drive, you're already spending orders of magnitude more time per operation than any function call costs. Code inlining is the LEAST of your problems.
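 
If you want to convince yourself, here is a rough C++ sketch (my own; "somefile.txt" is a made-up name) that times a million out-of-line function calls against a single read from disk. The exact numbers will vary by machine, but a lone cold disk read, let alone a database round-trip, tends to cost as much as an enormous number of function calls:

#include <chrono>
#include <fstream>
#include <iostream>
#include <string>

// Kept out-of-line so the loop below really pays the call overhead.
// (GCC/Clang attribute; on MSVC you'd use __declspec(noinline).)
__attribute__((noinline)) int tiny(int x) { return x + 1; }

int main()
{
    using clock = std::chrono::steady_clock;

    auto t0 = clock::now();
    int sum = 0;
    for (int i = 0; i < 1000000; ++i)
        sum = tiny(sum);                    // one million calls
    auto t1 = clock::now();

    std::ifstream file("somefile.txt");     // hypothetical file
    std::string line;
    std::getline(file, line);               // a single trip to the disk
    auto t2 = clock::now();

    using us = std::chrono::microseconds;
    std::cout << "1M calls: " << std::chrono::duration_cast<us>(t1 - t0).count() << " us\n"
              << "1 read:   " << std::chrono::duration_cast<us>(t2 - t1).count() << " us\n"
              << sum << '\n';               // keep the loop from being optimized away
}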
 
Here's an interesting tidbit from some time ago. I was teaching a class of C++ developers to do unit testing and test-driven development. It was an interesting class, and on day two we started delving into the concept of "Mock Objects".
Just a small intro: mock objects are a way for us developers to "replace" existing application classes and objects with our own classes. This is beneficial in various testing scenarios; for example, testing interactions between related objects, or replacing an expensive, time-consuming processing object with an empty shell that helps our tests run faster.
Anyway. Mock objects (and most mock object frameworks out there) require that the object you would like to replace have a specific interface. It's a legitimate requirement: in order for us to be able to replace an object (without any late-binding tricks), we need to support the public contract it defines: its interface.
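 
Here's what that looks like in C++ terms, with hypothetical names of my own: the "interface" is an abstract base class, the production class implements it, and the test supplies a cheap stand-in.

// The public contract ("interface"): an abstract base class.
class IDataAccess {
public:
    virtual ~IDataAccess() = default;
    virtual int customerCount() = 0;
};

// The real, expensive implementation used in production.
class DbDataAccess : public IDataAccess {
public:
    int customerCount() override
    {
        // Imagine a slow database query here.
        return /* result of the query */ 0;
    }
};

// The mock: an empty shell returning canned answers, so tests run fast.
class MockDataAccess : public IDataAccess {
public:
    int customerCount() override { return 42; }
};

// Code under test depends only on the interface, so either will do.
bool hasCustomers(IDataAccess& data) { return data.customerCount() > 0; }

A test can now pass a MockDataAccess to hasCustomers() and never touch the database.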
 
One of the things the C++ developers told me was that this might pose a problem: with interfaces, calls would no longer go directly to the methods; another level of indirection would be added to the call stack, and no code inlining could be done. This could seriously hurt performance, they said.
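Their concern in a nutshell, as a sketch of my own: a call made through the interface is dispatched via the vtable at runtime, so the compiler generally cannot inline it the way it can a direct call (though modern compilers can sometimes devirtualize when the dynamic type is provable):

struct IWorker {
    virtual ~IWorker() = default;
    virtual int work(int x) = 0;
};

struct Worker : IWorker {
    int work(int x) override { return x * 2; }
};

int main()
{
    Worker w;
    int direct = w.work(21);    // static target: easily inlined

    IWorker& iw = w;
    int indirect = iw.work(21); // vtable dispatch: usually not inlined

    return direct - indirect;   // 0
}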
Turns out that some of them were indeed real-time developers, but most of them were not.
 
(A little C++ technical info if you're interested. If not, skip this paragraph)
One way to overcome this in C++ is not to use mock objects at all, but to replace the library where the objects reside with your own library of fake objects (just before the link stage), as sketched below. That way the calls are not virtual and can still be inlined.
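A sketch of what they meant, with hypothetical names. Note that these are three separate files shown together; the test build links FakeDataAccess.cpp in place of DataAccess.cpp:

// DataAccess.h -- shared header; note: no virtual functions.
class DataAccess {
public:
    int customerCount();     // non-virtual, so calls can be inlined
};

// DataAccess.cpp -- real implementation, linked into the product:
//     g++ main.cpp DataAccess.cpp
int DataAccess::customerCount()
{
    // ... the real (slow) database query would go here ...
    return 0;
}

// FakeDataAccess.cpp -- test implementation, linked into the test
// binary *instead*:
//     g++ tests.cpp FakeDataAccess.cpp
int DataAccess::customerCount()
{
    return 42;               // canned answer; no database involved
}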
 
I assured them that, with the exception of the real-time developers, the IT developers there had little to worry about. If their application is slow, it is most probably for numerous reasons worth looking at *before* even getting to the code inlining issue.
 
