I was scanning the API of DacFx, the ‘engine’ of SSDT, and became interested in the facility it contains for automating SQL code reviews. DacFx allows you to parse the SQL code sufficiently to do static code analysis, to scan for heresies, deprecated code and code that doesn’t ‘conform to corporate policy’. Dave Ballantyne has used this feature to allow DacFx to detect SQL code smells.
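DacFx does this with a full T-SQL parser rather than simple pattern matching, but the general idea of rule-based smell detection is easy to sketch. The rules, names, and messages below are my own illustrative inventions in Python, not DacFx's (or Dave Ballantyne's) actual rule set:

```python
import re

# Hypothetical smell rules -- illustrative only, not DacFx's real checks.
SMELL_RULES = [
    (r"\bSELECT\s+\*", "SELECT * hides schema dependencies"),
    (r"\bNOLOCK\b", "NOLOCK hint risks dirty reads"),
    (r"\bsp_", "sp_ prefix forces a master-database lookup first"),
]

def find_smells(sql: str) -> list[tuple[int, str]]:
    """Return (line_number, message) pairs for each rule that matches."""
    findings = []
    for lineno, line in enumerate(sql.splitlines(), start=1):
        for pattern, message in SMELL_RULES:
            if re.search(pattern, line, re.IGNORECASE):
                findings.append((lineno, message))
    return findings

sample = """SELECT * FROM dbo.Orders WITH (NOLOCK)
EXEC sp_helptext 'dbo.MyProc'"""
for lineno, msg in find_smells(sample):
    print(f"line {lineno}: {msg}")
```

A regex scan like this is exactly what a proper parser avoids: it will happily flag `sp_` inside a comment or a string literal, which is why DacFx builds a syntax tree before applying any rule.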
I like ‘code smells’ when used by programmers to sharpen up their own code. I have, however, always been sceptical about relying on automated code reviews, and deeply suspicious of ‘code policies’. Such things are fine, I suppose, as a double-check, safety net, or final precaution on check-in, or perhaps at the nightly build, if one has failed to spot a problem in the quality of the code. If, however, you are waiting until release to ‘enforce company policies on code quality’ like some SQL Thought Police, that smacks of the worst form of management of the development process. You’re catching these problems too late, and sending the wrong message to the developer. Code quality is a cultural thing.
Code must be checked. Ostensibly, review exists to advise and assist a team member, and to help them develop their professional expertise. However, if you have the responsibility of supervising work, you also have an obligation to your employer to check code. Check for what? Although the general answer has to be quality, all manner of problems, such as those of design, localization, performance, and security, can be prevented by quiet, tactful supervision. Yes, security too: I’ve certainly experienced anarchic development departments whose output has to be treated as potentially toxic by other parts of the IT department, where SQL code has to be checked for concealed permissions or roles being created, and procedural code has to be sanity-checked for foolishness and malice. The result is gridlock.
Although there are general software metrics for code quality, such as cyclomatic complexity (which measures complexity and structure), model checking, and data-flow analysis, along with techniques for checking for both program and security vulnerabilities, I haven’t yet seen them catch anything that couldn’t be detected by eye. In procedural code, metrics like cyclomatic complexity can warn you of trends, such as the growth of technical debt, but tell you far less about individual sections of code. I’ve yet to see a code quality metric transfer successfully to SQL code.
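For what it is worth, the metric itself is simple: cyclomatic complexity is essentially one plus the number of decision points in a routine. This rough Python sketch, my own approximation that counts one per boolean-operator group rather than one per operator, shows just how little the resulting number says about what any particular piece of code does:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Rough McCabe estimate: 1 + the number of decision points."""
    tree = ast.parse(source)
    # Decision points: branches, loops, boolean groups, exception handlers.
    decision_nodes = (ast.If, ast.IfExp, ast.For, ast.While,
                      ast.BoolOp, ast.ExceptHandler)
    return 1 + sum(isinstance(node, decision_nodes)
                   for node in ast.walk(tree))

sample = """
def classify(x):
    if x > 0 and x < 10:
        return 'small'
    for i in range(x):
        if i % 2:
            return 'odd seen'
    return 'other'
"""
print(cyclomatic_complexity(sample))  # 5: two ifs, one boolean group, one loop, +1
```

Two routines with identical scores can differ wildly in quality, which is the point: the number flags a trend worth a look, not a fault.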
From my own experience, I’d always choose peer code review, where we check and comment on each other’s code. Of course, the code must be in source control, the process needs a simple workflow, annotation and messaging system, and it must be supervised; but wherever I’ve seen it introduced, it has detected potential problems at a point where they can easily be put right. What is more, the trained eye can spot problems far more quickly and effectively than an automated code-review process.