Roy Osherove


.Net Deep Dive Recap

Today was a long and cool day (although it was hot outside).

The “.Net Deep Dive” full-day event at the Hilton (a.k.a. “TechEd for the financially challenged”) took place today, and I got to attend four lectures.

Three of those were extra cool, and one was a bore from hell.

 

The first lecture was given by Yosi Taguri and was about advanced debugging techniques.

(btw, you might want to check out my .Net Debugging Resources page...)

 

Lots of cool stuff was shown there. Highlights:


- Using DBG files for production debugging
- Reflector, and the Reflector Add-in
- Trace & Debug differences (see the quick sketch below)
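
The Trace/Debug distinction is easy to gloss over, so here is a minimal sketch of my own (not from the lecture) showing the practical difference:

```csharp
// Minimal sketch (my own, not from the lecture): the practical difference
// between Debug and Trace output in System.Diagnostics.
using System;
using System.Diagnostics;

class TraceVsDebug
{
    static void Main()
    {
        // Send Trace output somewhere visible for this sample.
        Trace.Listeners.Add(new TextWriterTraceListener(Console.Out));

        // Debug.WriteLine is marked [Conditional("DEBUG")], so the compiler
        // strips the call entirely unless DEBUG is defined - by default,
        // only in Debug builds.
        Debug.WriteLine("Debug: disappears from Release builds");

        // Trace.WriteLine is marked [Conditional("TRACE")], and TRACE is
        // defined for both Debug and Release builds by default, so these
        // calls survive into production - handy for production debugging.
        Trace.WriteLine("Trace: stays in Release builds too");
    }
}
```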

 

One of the best moments in that lecture was when Yosi introduced Reflector.

First, he gave a very exciting intro on how we all have the Microsoft .NET source code right under our noses and don’t even know it.

He then proceeded to show us the cool tool in all its glory. He really knows how to capture a crowd.

One nice anecdote he mentioned was a problem he had in one of the applications his team was building: somewhere, it broke in the move to .NET 1.1.

He then proceeded to show the exact reason why it broke, something that would never have been found without using a decompiler: inside the framework assembly, Server.Transfer was passing a Boolean flag to an overloaded version of itself which accepts that flag, only in version 1.0 it passed false as the default to that method, and in 1.1 it passes true. Totally amazing. Seeing this code in Reflector (both the 1.0 and 1.1 versions of the assembly were loaded in Reflector for a side-by-side comparison of the method) was a very powerful presentation.
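
Just to make the mechanics clearer, here is a rough sketch of my own, not the actual decompiled System.Web code, of the kind of forwarding Reflector revealed; the hard-coded default is the part that changed between versions:

```csharp
// Rough illustrative sketch only - NOT the real decompiled System.Web source.
// It shows the pattern Reflector revealed: the one-argument Transfer simply
// forwards to the two-argument overload with a hard-coded flag, and that
// hard-coded value is what changed between .NET 1.0 and 1.1.
public class ServerUtilitySketch
{
    // Roughly what the 1.0 assembly did (per the lecture):
    //
    //   public void Transfer(string path)
    //   {
    //       Transfer(path, false);
    //   }

    // Roughly what the 1.1 assembly does - same signature, different default:
    public void Transfer(string path)
    {
        Transfer(path, true);   // the silently changed default
    }

    public void Transfer(string path, bool preserveForm)
    {
        // ... the actual transfer logic would live here ...
    }
}
```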

When he finally showed the link to reflector, you could hear the immediate paper and pen rustling of 200 attendees. Yeah, it was a captive audience.

Showing the Reflector Add-in while debugging a .NET application from inside VS.NET was a pretty impressive deal as well, but the initial impression from Reflector was much more powerful, and justly so. It is one amazing tool.

The screen looked a lot like this one, although there, the Reflector window was *inside* VS.NET:

 

One comment about that presentation: he should have used bigger screen fonts.

 

Oh, he also mentioned .NetWeblogs, so expect a rush of Israeli server hits in the next couple of days :)

Hopefully, more Israeli .NET programmers will discover these weblogs for the valuable information resource they are, and start tapping into the minds of some very interesting people and their ideas.

 

The second lecture dealt with COM+ tips and tricks, but was such a bore that I won’t even try to remember what the hell went on over there. People were yawning most of the time, and I think time stood still for a few moments. Advice to the lecturer: you should at least try to look like you are interested in the lecture you are giving.

 

The third lecture dealt with caching Patterns & Practices and was a pretty fun deal.

Points of interest:

- You can use the ASP.NET cache’s advanced functionality in non-web applications as well; just reference System.Web and use HttpRuntime.Cache (see the sketch right after this list).
- The Caching Application Block looks to be one of the coolest new gadgets in town. It will work with SQL Server, memory-mapped files and static variables, all according to the needs you specify.
- There are specific patterns and caching strategies for any situation you’ll find yourself in. The caching architecture guide can be found here. There is a specific process to follow when deciding on a strategy: determine the scope and staleness requirements of your cached data, and based on these you can find the right strategy to fit your needs. There are actually decision tables on their site (Chapter 3, I believe) that show you what to do. Very nice.
- For each application scope needed, the cache should use a different state-holder solution:
  - For AppDomain scope (i.e. caching data for your application’s internal use only), static variables are best.
  - For machine scope (i.e. multiple applications using the same cached data), memory-mapped files.
  - For server farm scope (i.e. multiple machines using the same cached data), SQL Server persistent storage.
- Do not use a Remoting singleton for your caching needs. It is not scalable, and its performance is the worst of all the strategies mentioned above.
- SQL Server persistent storage is basically the only way to go for multi-machine scope caching (which I find weird; I thought there were better solutions out there…).
- Microsoft’s Patterns & Practices team is always looking for input and feedback, to know what developers really need: are they going in the right direction? What’s missing? Are they wrong about something? They should have all the feedback in the world IMHO, since the more they get from us, the more we get from them. Symbiosis is not an ugly word here…
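
To show how simple that first bullet really is, here is a minimal console-application sketch of my own (not the lecture’s demo); it only assumes a reference to the System.Web assembly:

```csharp
// Minimal sketch: using the ASP.NET cache outside a web application.
// Requires a reference to the System.Web assembly.
using System;
using System.Web;
using System.Web.Caching;

class CacheDemo
{
    static void Main()
    {
        Cache cache = HttpRuntime.Cache;

        // Insert an item that expires 30 seconds after it was added.
        cache.Insert(
            "greeting",
            "Hello from the ASP.NET cache",
            null,                            // no cache dependency
            DateTime.Now.AddSeconds(30),     // absolute expiration
            Cache.NoSlidingExpiration);

        // Read it back; a null result means it expired or was evicted.
        string value = (string)cache["greeting"];
        Console.WriteLine(value == null ? "item not found in cache" : value);
    }
}
```

Same Cache object, same dependencies and expiration policies you get inside ASP.NET, just hosted outside a web application.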

 

 

The fourth lecture dealt with performance and speed improvement tips & tricks.

This was easily one of the best lectures I’ve attended. The lecturer was talking fast and to the point, much like what he was trying to explain to us.

- If you really need something to work fast and you don’t see any built-in solution, build it yourself, the ugly way. No two ways about that.
- Know the performance hit of your code. Just thinking about the boxing operations involved when using Hashtables to look up integers was enough to convince me that yeah, there’s lots of room for improvement if you really need it.
  - Why do you get boxing with Hashtables? Because a Hashtable accepts an object as its key, so any int, long or double you pass to it is automatically boxed and becomes an object representing that int, long or double. This takes plenty of time. You won’t always need to get rid of that performance hit, but if and when you do, it’s good to know that the only way to avoid it is to build your own Hashtable-like implementation (see the sketch after this list).
- If you only have one CPU on the machine, splitting a task into four different threads won’t help you; it’ll only increase the amount of time it takes to finish the task. Always consider the number of processors available when designing multithreaded tasks.
- COM Interop is a big performance hit. Sometimes you’ll need to come up with some ugly stuff to avoid converting data structures between your layer and the other layer (think .NET arrays vs. C arrays).
- ThreadPool rocks.
- Object pooling rocks.
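
To make the boxing point concrete, here is a minimal sketch of my own (not code from the lecture), contrasting the Hashtable path with a deliberately crude, int-only alternative:

```csharp
// Minimal sketch: the hidden boxing cost of Hashtable with value-type keys,
// and a crude specialized alternative that avoids it.
using System;
using System.Collections;

class BoxingDemo
{
    static void Main()
    {
        // Hashtable keys and values are typed as object, so every int key
        // (and int value) is boxed on insert, and boxed again for lookups.
        Hashtable table = new Hashtable();
        for (int i = 0; i < 1000; i++)
        {
            table[i] = i * 2;              // two boxing operations per call
        }
        int doubled = (int)table[42];      // unboxing on the way out
        Console.WriteLine(doubled);

        // A hand-rolled, int-only lookup avoids boxing entirely. This one is
        // deliberately simplistic (dense keys, no collision handling) just to
        // show the idea of building your own specialized structure.
        int[] intMap = new int[1000];
        for (int i = 0; i < 1000; i++)
        {
            intMap[i] = i * 2;             // no object allocation at all
        }
        Console.WriteLine(intMap[42]);
    }
}
```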