In the Windows environment, few choices seem safer for application design than a rather staid single-tier architecture making ODBC/JDBC calls to the RDBMS. I can say this with years of experience in developing applications ranging from the dull-but-worthy to the esoteric. However, there is an interesting long-term cost to taking the easy route to delivering an application, particularly where the database server ends up evolving into a behemoth: a monster that is shared by a number of applications and is the source of downstream reporting and analysis.
The advantages of growing a database server to accommodate much of the data of an enterprise are obvious. Because the duplication of data is avoided, the transfer of data across networks is minimised. Distributed transactions are avoided, and you aren't compelled to tackle the complexities of secure, robust messaging. You can safely place business logic in the database, where it can be shared by any application that needs it. I've seen relatively modest databases grow alarmingly into Godzillas in consequence of the inescapable fact that this type of architecture represents the fastest and simplest way of delivering functionality to the organisation.
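By way of illustration, here is a minimal JDBC sketch of that 'shared business logic' idea, assuming a hypothetical dbo.CalculateOrderDiscount stored procedure and an illustrative connection string (neither comes from this article). The discount rule lives once, in the database; any client application, whatever its language or platform, invokes the same logic rather than re-implementing it.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class SharedLogicExample {
    public static void main(String[] args) throws SQLException {
        // Illustrative connection string for the Microsoft JDBC driver;
        // server, database and procedure names are hypothetical.
        String url = "jdbc:sqlserver://dbserver;databaseName=Sales;integratedSecurity=true";
        try (Connection conn = DriverManager.getConnection(url);
             // JDBC escape syntax for calling a stored procedure: the
             // business rule is defined once, inside the database.
             CallableStatement stmt =
                 conn.prepareCall("{call dbo.CalculateOrderDiscount(?, ?)}")) {
            stmt.setInt(1, 10423);                              // order id (example value)
            stmt.registerOutParameter(2, java.sql.Types.DECIMAL); // computed discount
            stmt.execute();
            System.out.println("Discount: " + stmt.getBigDecimal(2));
        }
    }
}
```

A second application written against ODBC, or in another language entirely, would call the same procedure and so could never drift out of step with this one on how the discount is calculated.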
There are disadvantages to such an architecture once the database becomes the hub. The symptoms show up when the deployment of individual applications slows to a crawl, when new applications can't easily be accommodated, or when locking and blocking become a major problem. The cause? Databases that serve several applications and processes simultaneously have to be developed with a very different mind-set. I call it 'neighbourliness': the frugal use of CPU and I/O; care over shared resources and craftsmanship over performance; the careful encapsulation of logic, with parsimonious use of 'proxies' and abstraction to prevent the close coupling of the database's component parts; good 'instrumentation' at the application level; the use of interfaces that are defined and 'versioned'; and the fastidious archiving of obsolete data. It is all about cohabiting in a multiuser, parallel world. Where, alternatively, databases have just evolved over time in response to immediate requirements, they become like the neighbours from hell, and the working server becomes too fragile, with too many interdependencies, to update easily. Maybe it isn't the technology, but the way we use it.
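To make the 'versioned interface' idea concrete, here is a hedged sketch, again using JDBC, assuming a hypothetical api_v2.CustomerOrders view that stands as a proxy in front of the base tables. The application binds only to the versioned schema, never to the tables themselves, so the tables behind it can be refactored, and an older api_v1 retired in its own time, without breaking the clients that share the server.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class VersionedInterfaceExample {
    // The application queries only the versioned interface schema (the
    // hypothetical view api_v2.CustomerOrders), never the base tables.
    private static final String QUERY =
        "SELECT OrderID, OrderDate, Total FROM api_v2.CustomerOrders WHERE CustomerID = ?";

    public static void main(String[] args) throws SQLException {
        // Illustrative connection string; names are hypothetical.
        String url = "jdbc:sqlserver://dbserver;databaseName=Sales;integratedSecurity=true";
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement stmt = conn.prepareStatement(QUERY)) {
            stmt.setInt(1, 42); // customer id (example value)
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%d %s %s%n",
                        rs.getInt("OrderID"),
                        rs.getDate("OrderDate"),
                        rs.getBigDecimal("Total"));
                }
            }
        }
    }
}
```

The design point is the contract, not the code: publish the interface schema, keep it stable and versioned, and the close coupling that makes a shared server fragile never gets the chance to form.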
Many developers rage over the tyranny of the 'monolithic' database and look to drastic solutions such as a service-oriented 'microservice' approach. This, surely, is not a headache-reducer so much as a headache-replacer. Perhaps the simpler answer is to identify, develop and promulgate practices and techniques that allow enterprise-scale databases to provide the flexibility and rapid development that businesses typically require: to design and build such systems to be intrinsically 'deployable' right around the whole database lifecycle.