Database triggers. If you have spent any length of time in Oracle development circles you might have noticed something strange: pretty much every expert – commentator, blogger, Oracle evangelist – has written an article condemning triggers. And every single time, the comment section is flooded with developers and DBAs defending them.
I recently joined the fray with an article warning against triggers, The Problem With Triggers – and, predictably, in came the ‘unwashed masses’ to defend them.
Well, I have a confession: I actually agree with them. I can see the case against triggers, but I can also see that a strong case can be made for them. In this article I will be making that case, relying partly on the comments I received on my earlier article.
Are Triggers Guns – Or Knives?
Let us start at the end, with the conclusion. No one is arguing that triggers are not powerful – and, in consequence, potentially dangerous. This is why trigger-haters say: Triggers are evil, ban them. In many ways, it is similar to the argument about guns. “Guns are dangerous, ban all guns.”
But what if triggers are more like … knives? Knives are dangerous, but no one wants to outlaw knives. We all agree that knives are dangerous in untrained hands, but very useful in trained hands.
So how can I prove that triggers are only knife-level dangerous? Let me respond to the arguments I made in my earlier article.
Triggers Fire Automagically
Because triggers fire automatically, they can sometimes seem like hidden anarchists lobbing Molotov cocktails into your code from the shadows.
The argument is that because triggers fire unbidden, they pose a risk: they are easy to overlook and hard to plan for.
However, Toon Koppelaars (whose excellent blog in defence of triggers was recommended to me by Martin Preiss) cuts this argument down at the knees: the default clause of table columns does stuff automagically, as do cascade delete foreign keys. How come no one’s getting an I Hate Default Values tattoo?
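To see the point, consider this sketch (the table and column names are invented for illustration): both features below act ‘automagically’, just as a trigger would, yet nobody campaigns against them.

```sql
create table orders (
  order_id    number primary key,
  created_on  date default sysdate,    -- filled in unbidden on insert
  status      varchar2(10) default 'NEW'
);

create table order_lines (
  line_id   number primary key,
  order_id  number references orders
            on delete cascade          -- deletes fire unbidden, too
);

-- Neither statement below mentions created_on, status or order_lines,
-- yet all three are silently affected:
insert into orders (order_id) values (1);
delete from orders where order_id = 1; -- order_lines rows vanish as well
```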
Triggers are a convenient hiding place for bugs; … no one ever remembers to look in triggers.
The argument here is that because we do not explicitly execute triggers it is much harder to track down bugs within them.
However, this is only true if your organisation does not follow a clearly-defined development approach and uses triggers haphazardly. If your coding standards make triggers a prominent part of your methodology, they will be among the first places you look for bugs.
Do not use triggers as an occasional whim; use them as a rampart of your database, or not at all.
The argument here is that if you make a call to, say, utl_mail or a web service in a trigger and your transaction is later rolled back for some reason, those calls are irreversible.
However, Andrew McPherson points out that this “is more bad design than an example of a bad trigger”. He is correct. Detractors of triggers often advocate the creation and use of APIs; however, the same call made from an API would suffer exactly the same problem if its transaction were rolled back.
“If the [call] was implemented via a queue, then the trigger would not have the issues you specified,” McPherson explains.
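Here is a sketch of what McPherson is suggesting, using hypothetical table and trigger names (identity columns assume Oracle 12c or later; Oracle’s Advanced Queuing would be the heavier-duty alternative). The trigger only records the request; nothing external happens inside the transaction.

```sql
create table email_queue (
  queue_id   number generated always as identity primary key,
  recipient  varchar2(200),
  message    varchar2(4000),
  queued_on  date default sysdate
);

create or replace trigger trg_orders_notify
after insert on orders
for each row
begin
  insert into email_queue (recipient, message)
  values ('sales@example.com',
          'New order ' || :new.order_id || ' received');
  -- If the triggering transaction rolls back, this insert rolls back
  -- with it: no mail is ever sent for a phantom order.
end;
/
```

A separate scheduled job then drains email_queue and calls utl_mail, committing each send independently of the original transaction.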
Triggers vs APIs
Trigger advocates point out that there are risks in interfacing with your database solely through API calls. John Flack points out that it weakens your ability to fully utilise tools that enjoy a tight integration to the database. Oracle Forms and Apex, for example, effortlessly insert, update and delete from tables (and, therefore, work very well with triggers). Crowbarring in an API layer is fiddly and, some may argue, redundant.
Indeed, Robert Schulke makes exactly that argument: “We are now in the business of writing something to substitute for the SQL language. This is properly addressed by the Department of Absurd Redundancy Department.”
If I was starting today, I would definitely use fine grained auditing instead of triggers.
Andrew McPherson, who, I have to admit, seems to be more familiar with the subject than I am, argues that auditors are more amenable to database triggers than to fine grained auditing.
“Audit that is enforced by the database passes scrutiny much more easily than audit by APIs. I have dealt with a number of high security databases where audit was significantly more important than the data itself. Sitting in meetings with auditors on at least an annual basis proving that the audit is complete is far from a trivial task, and one that triggers make much easier. Oracle’s fine grained auditing is a step up, but still not as flexible as a trigger.”
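A minimal example of the kind of audit trigger McPherson has in mind might look like this (the table and column names are invented for illustration): because it lives in the database, no application code path can bypass it.

```sql
create or replace trigger trg_salaries_audit
after update of salary on employees
for each row
begin
  -- Record who changed what, and when, in a separate audit table.
  insert into salaries_audit
    (employee_id, old_salary, new_salary, changed_by, changed_on)
  values
    (:old.employee_id, :old.salary, :new.salary, user, sysdate);
end;
/
```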
I have worked for companies that rely heavily on triggers; I currently work for one that relies heavily on APIs. When I worked with triggers, I must admit that we occasionally ran into restrictions and, more than once, butted up against the dreaded mutating table error. (Friendly tip: do not attempt to contort your code around it using autonomous transactions or statement-level triggers. That way leads to madness – and very badly-written code.)
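For readers who have not met it, the mutating table error (ORA-04091) appears when a row-level trigger queries or modifies the very table it is defined on – a sketch, with hypothetical names:

```sql
create or replace trigger trg_check_avg_salary
after update of salary on employees
for each row
declare
  v_avg number;
begin
  -- Raised at runtime, not at compile time:
  -- ORA-04091: table EMPLOYEES is mutating,
  -- trigger/function may not see it
  select avg(salary) into v_avg from employees;
end;
/
```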
Now that I work with APIs, we face different frustrations. Code takes longer to write, and our Apex integration is less vanilla.
But if I was starting from scratch tomorrow, which approach would I choose? APIs – but only by a whisker.