The idea of ‘instrumenting’ an application often seems to puzzle application developers. Modern IDEs are so full of code-tracers, debuggers and profilers that developers seem to resent the idea that applications should be designed and developed so as to be measurable.
The application’s methods need to be instrumented so that, on demand, the application can provide sufficient information to review the performance or determine the events that led to failure. This information is vital not only during development, but also when the application is under test, and often when it is in production. Instrumentation is also one of those techniques that, over time, transforms the task of deployment from requiring late-night heroics to being routine and predictable.
Database developers have long had a culture of ‘instrumentation’ because the interface is so spartan. My old friend and Oracle mentor, Tom Kyte, who taught me as much about databases as anyone, was an instrumentation fanatic; to my knowledge, as a result, every Oracle application he ever built was still going strong at least ten years later.
Every application he developed was sprinkled liberally with debug code so that, at the flick of a ‘switch’ (i.e. a parameter value stored in a configuration file), he could enable a wealth of diagnostic information for that database, application module or web page. He could remotely debug any of his applications, and tell you within minutes the likely cause of slow performance. To those who complained that debug code added needless overhead, his response was simple: instrumentation is not “overhead”. Overhead is something you can remove without losing vital functionality.
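The ‘switch’ pattern is easy to sketch in any language. Here is a minimal, platform-neutral illustration in Python of the idea: a configuration value decides whether a method emits timing diagnostics, so the debug code stays in place permanently but costs almost nothing when switched off. All names here (the config keys, the `billing` module, `process_invoice`) are hypothetical, not Tom Kyte's actual code:

```python
import logging
import time

# Hypothetical configuration; in a real application this might come from
# a config file, an environment variable, or a database parameter table.
CONFIG = {"debug_modules": ["billing"]}  # the 'switch'

def debug_enabled(module: str) -> bool:
    """Return True if diagnostics are switched on for this module."""
    return module in CONFIG.get("debug_modules", [])

def process_invoice(invoice_id: int) -> str:
    """An instrumented method: does its work, and optionally reports on it."""
    start = time.perf_counter()
    # ... the real work would happen here ...
    result = f"invoice {invoice_id} processed"
    if debug_enabled("billing"):
        elapsed_ms = (time.perf_counter() - start) * 1000
        logging.getLogger("billing").debug(
            "process_invoice(%d) took %.3f ms", invoice_id, elapsed_ms)
    return result
```

When `"billing"` is removed from `debug_modules`, the same binary runs silently; nothing needs to be recompiled or redeployed to turn diagnostics on or off.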
Nowadays, because we are more aware of the whole lifecycle of applications, the requirements of operations people are being increasingly designed into the development process. On the Windows platform, Performance Counters are the IT Pro’s stethoscope, and PerfMon is “the machine that goes ping”. There are plenty of third-party monitoring systems that use performance counters. It is hard to find a server process that doesn’t offer an impressive range of counters with which to check the health of the system.
You can get a long way just by enabling the collection of PerfMon counters within the methods of your application, and they’re a great way to start instrumenting server-based applications. You can analyze them in SQL Server and in PowerShell; you can build performance counter metrics into ASP.NET MVC applications, or even use PowerShell to publish custom performance counters.
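On Windows, custom counters would be published through a PerformanceCounter category so that PerfMon and third-party monitors can poll them. The underlying pattern, though, is platform-neutral: instrumented methods increment named, business-level counters, and a monitor takes periodic snapshots. A minimal sketch of that pattern in Python (the registry class and counter names are illustrative, not a real PerfMon API):

```python
import threading
from collections import defaultdict

class CounterRegistry:
    """Toy stand-in for a performance-counter category.

    On Windows the equivalent would be a registered PerformanceCounter
    category that PerfMon reads; this sketch just keeps totals in memory.
    """
    def __init__(self):
        self._lock = threading.Lock()          # counters are updated from many threads
        self._counters = defaultdict(int)

    def increment(self, name: str, by: int = 1) -> None:
        with self._lock:
            self._counters[name] += by

    def sample(self) -> dict:
        """Snapshot all counters, as a monitor would on each polling interval."""
        with self._lock:
            return dict(self._counters)

counters = CounterRegistry()

def view_product(product_id: int) -> None:
    # An instrumented method: records a business-level metric alongside its work.
    counters.increment("products_viewed")
```

The point is that the counters speak the language of the business (products viewed, transactions completed), not just the language of the operating system (CPU, disk queue length).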
If your server-based application isn’t able to provide decent metrics, it’s not for lack of information on how to do it. So why is it so rare to find a third-party server process, such as a database or website, that can provide the metrics to illuminate what is going on in terms of business transactions, concurrent users, products being viewed, information requests and so on? I’d love to hear your views.