DevOps and Database Lifecycle Management

Database Lifecycle Management (DLM) aims to provide a roadmap of what is required, and when, to deliver and maintain effective databases that can respond quickly to business change. How does the DevOps movement, as it applies to databases, fit into this? William Brewer explains how DevOps provides the organizational change between delivery and operations that makes important parts of the process easier to introduce.

DevOps, Continuous Delivery & Database Lifecycle Management
Culture and Organization

As internet commerce took off spectacularly in the ‘noughties’, a new breed of application became more common, one that was not only the public face of the organization but was actually intrinsic to the organization.

The organization was, effectively, the application but unlike the classic applications of commerce, these internet applications had to evolve continuously in the face of competition and demand from users. They did not conform to the accepted lifecycle of previous applications, and could not afford any relationship with the business other than total involvement.

The traditional divisions and practices of established IT simply weren’t appropriate any more. A new way of team-working had to evolve to make it possible to develop applications rapidly and deliver new functionality with as little delay as possible. DevOps in this context wasn’t just a nice idea, it was essential.

Today, the majority of organizations aren’t in this predicament. Rather than concentrating on a handful of applications under continuous development, surrounded by a plethora of Business Intelligence satellites, they maintain applications that have a considerable operational task with clear-cut objectives and constraints. In many cases, the delivery activity is focused on managing the data-flow between bought-in applications, and on both reporting and BI. The creation of new business applications is far less crucial to the business.

In this sort of setting, DevOps isn’t essential, but does it represent good practice? The answer must be that the DevOps movement has developed techniques to solve the challenges of rapid delivery, and these techniques have turned out to be generally useful. DevOps practices have become, in many cases, mainstream. Techniques such as continuous deployment have helped a large proportion of the industry.

DevOps, originally known as ‘Agile Infrastructure’, started as a groundswell of frustration at the entrenched segregation and autonomy of the three application-oriented activities within IT: delivery, operations and governance. There was no established methodology that dictated such an ‘iron curtain’. On the contrary, methodologies such as Catalyst had been advocating a more cooperative cross-specialism approach for over 20 years.

None of the development methodologies such as SSADM advocated this silo mentality either. No ‘waterfall’ approach had actually insisted on a Chinese wall between operations and delivery. However, IT practices had become entrenched in a particular way of working. It was a cultural problem rather than a technical one, but the growing need for a more rapid and predictable delivery of change to ecommerce sites brought matters to a head.

DevOps is all about bridging the cultural and technical gap between different teams so that practices such as Continuous Integration (CI) and Continuous Delivery (CD), which demand a high level of cooperation between teams, become possible.

Done well, DevOps represents a transfer of effective practices and methods between operations and delivery teams, infecting Ops people with some of the best development practices such as source control, versioning and issue-tracking, and automated processes like testing. In turn, operations teams give insights to the delivery teams about concerns such as infrastructure monitoring, immutable architecture, real-time alerts, configuration and integration management, and the requirements underlying high availability, delivery and hosting.

In some cases, it may involve operational people working ‘embedded’ within delivery teams, or developers working on operational tasks, but it doesn’t require it. The focus is on the exchange of ideas and expertise. It does not tackle the sometimes conflicting concerns between governance and delivery or operations, nor does it encompass the very earliest stages of planning a database, or the mature system that is no longer being actively developed.

Both delivery and operations have aspects of responsibility that are of little outside interest. Ops teams have a responsibility to maintain many processes, applications and databases that aren’t being developed in-house, or aren’t being actively supported at all. Delivery teams have responsibility for implementation detail that is of no interest to either operations or governance. DevOps is relevant where the interests of delivery and operations meet.

The difference between DLM and Database DevOps

How does the DevOps movement, as it applies to databases, differ from Database Lifecycle Management (DLM)? The short answer is that database DevOps focuses on part of the database lifecycle: the intersection between Delivery and Operations. It is, to be frank, a very interesting part, but DevOps for the database doesn’t aim for a broad view of the whole life of a database. Database DevOps is more concerned with enabling good cooperation between Development and Operations to facilitate the rapid delivery of database changes.

DLM, on the other hand, aims to make sure that the development, use and replacement of a database system achieves a measure of quality and scientific method, rather than relying on individual heroics. It encourages teams to evolve their way of working from the chaotic to the managed. It sheds light on all the necessary activities of providing a technical service to users, even those that are of no interest or concern to the vast majority of developers.

Is DevOps enough?

DevOps is a key to the effective delivery of an application, and is particularly useful in moving from big intermittent releases to Continuous Delivery.

It makes it easier to introduce version control for all aspects of hosting applications in organizations. It is likely to make applications easier to support and maintain, through making developers aware of operational requirements earlier in development, and by ‘shifting-left’ many of the test, security and compliance chores that were customarily postponed until the deployment pipeline.

However, it really needs to go hand-in-hand with a new spirit of cooperation and culture-sharing with the governance aspect: otherwise, we are in danger of delivering the wrong functionality more quickly and efficiently.

DevOps and the database

There was an idea that gained a lot of traction when DevOps was maturing: the database, like the application, was ‘just code’, and should be treated as part of the application. A database, in this way of thinking, is just a slightly weird component of the application that can be coded, built, tested and deployed alongside the application code.

The truth is rather more complex. It is true that a build script can be generated for everything within a database, and these build scripts can reconstitute the database. This is also the case for both the data and the metadata, the DML and the DDL. However, database developers and DBAs use a number of methods to develop a database, including entity-relationship diagramming tools, wizards and interactive table-design tools, and the code is often generated retrospectively. Each build must bring all the development and test databases in line with the current version, whilst preserving the existing data. Where changes affect base tables in ways that change the data, this requires migration scripts, or changes to the data import process.
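To illustrate, here is a minimal sketch of such a migration script, assuming a hypothetical dbo.Customer table whose FullName column is being split into FirstName and LastName while preserving the existing data:

    -- A sketch of a hand-cut migration script. The table and columns are
    -- hypothetical; the point is that the data is migrated before the old
    -- column is dropped, so nothing is lost on upgrade.
    BEGIN TRANSACTION;
    ALTER TABLE dbo.Customer
        ADD FirstName nvarchar(50) NULL,
            LastName  nvarchar(50) NULL;
    GO
    -- Populate the new columns from the existing data.
    UPDATE dbo.Customer
       SET FirstName = LEFT(FullName, CHARINDEX(' ', FullName + ' ') - 1),
           LastName  = STUFF(FullName, 1, CHARINDEX(' ', FullName + ' '), '');
    GO
    -- Only once the data is safely migrated is the old column removed.
    ALTER TABLE dbo.Customer DROP COLUMN FullName;
    COMMIT TRANSACTION;

Each such script takes the database from one known version to the next, which is why it belongs in source control alongside the application code.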

There are also server settings that either affect the way the database works, such as server configuration, or are effectively part of the database, such as agent jobs or alerts. There are database settings too, such as file locations, that are enshrined in code but have to be changed to suit the server on which the database runs. In short, it is easier to work in isolation in application development than in database development: the latter is always a team-based activity.
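One common way of coping with settings that vary from server to server is to parameterize them in the deployment script, sketched here with SQLCMD variables (the paths and names are invented for illustration):

    -- Per-server settings such as file locations are supplied at deployment
    -- time as SQLCMD variables, rather than being hard-coded in the script.
    :setvar DatabaseName "Sales"
    :setvar DataPath "D:\SQLData"

    CREATE DATABASE [$(DatabaseName)]
        ON PRIMARY (NAME = N'$(DatabaseName)_Data',
                    FILENAME = N'$(DataPath)\$(DatabaseName).mdf')
        LOG ON     (NAME = N'$(DatabaseName)_Log',
                    FILENAME = N'$(DataPath)\$(DatabaseName)_Log.ldf');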

Another problem for businesses grappling with the issue of evolving applications rapidly in the face of competition is that relational databases aren’t amenable to rapid change once they have become established. Relational database practice assumes that the data is well-understood before the database is designed. Changes to the table schema of a complex database can be tricky, not only in the design changes and corresponding changes to every dependent database object, but also in the data migration. Databases don’t easily fit the Agile paradigm, though Agile does acknowledge the need for a good understanding of the business domain before development starts.

Initially, NoSQL databases offered the tempting hope that, by being ‘schema-less’, one could get all the advantages of relational databases without the pain. Sadly, their advocates had forgotten that RDBMSs provide ACID-based data and transactional integrity, and these qualities couldn’t be fudged by promising ‘eventual consistency’. NoSQL databases were fine for the type of data for which they were developed, but were dangerous for transactional operations. Some spectacular Bitcoin exchange collapses convinced even the NoSQL die-hards that transactionality was required for any commercial activity.

Microservice architectures, which developed from the ideas of service-oriented architecture, also offered the hope of avoiding the need for an enterprise-scale RDBMS to manage the transactional integrity of business processes; such an RDBMS was seen to make even fast-evolving applications dependent on an institutionally-oriented operational culture. Here again, the problems of managing distributed transactions reliably had been grossly underestimated. However, where transactionality isn’t an issue, this architecture can cut down on interdependencies and therefore make development easier.

The most promising solution to this problem of how to accommodate fast-developing applications that need to deploy continuous changes to a changing market, is to understand the nature of the data involved. This makes it possible to use Graph databases, document databases and other specialized database systems for unstructured or semi-structured data, relational databases for transaction processing, and OLAP/Cubes for business intelligence. In other words, a heterogeneous data platform is adopted, rather than a single platform.

Is there such a thing as a ‘DevOps’ tool?

Before DevOps, there was a temptation for some of the major players in the industry to take the opportunity of a change or initiative in the approach to software development to tie users into an all-encompassing solution to the entire application delivery process: the Godzilla IDE. This would include the obvious components such as build servers, source control systems, bug tracking, provisioning and so on. Rather than trying to integrate existing applications, behemoths were created that solved the problem of integration and communication by not having to communicate at all, since theirs was the only necessary application.

Ordinary people working in IT were even less inclined to buy that dream than senior management. Over the past two decades, software tools have become more specialised and component-oriented. Delivery and Ops people demanded that software had to work together, not only on Windows or Linux, but across platforms. A delivery team and an operations team needed to be able to pick and choose from a whole range of specialist tools, and had to rely on them working together in a CLI-based ‘toolchain’ that could then automate an entire process such as building, testing, packaging, releasing, configuring and monitoring. For network intercommunication, components can expose their services as Representational State Transfer (RESTful) services. The accent is on simplicity, and on ensuring that tools that support development processes can, where necessary, conform to the demands of the toolchain.

The groundswell of DevOps became concerned with solving problems with toolchains, often open-source, in which each tool, of specialised purpose, could take its input from the previous tool in the chain and pass its output on to the next. It meant that tools needed to co-habit, rather than dominate. Microsoft were late into this change in the way of managing processes but, with a fairly clear field and little need for backward-compatibility, were able to introduce a scripting technology that was not only able to provide the toolchain, but also to allow objects, rather than documents, to be passed between tools. Not only was it close to the old command-line batch scripts, but it also took inspiration from Bash scripts. Windows DevOps tools tend to be PowerShell-aware, or even shipped as PowerShell cmdlets, but any tool with a plain Command-Line Interface (CLI) can still be a useful participant.

Can a relational database be Agile?

The short answer is yes, but it requires some compromises in the way that data is accessed from front-end applications. Agile relational database techniques are well-established but rarely practiced.

Rapid deployments need to be as risk-free as possible. By default, a new release exposes the latest functionality to all users immediately. However, if it is possible to preserve the functionality of the original interface intact and expose the new functionality only to a few users whilst allowing immediate rollback, then this risk can usually be reduced to the point that it doesn’t affect the business significantly.

Where the delivery team are working on the deployment alongside the Ops team, it becomes easier to manage an operation that requires a lot of knowledge of the system.

Databases have a particular problem with this approach: the need to be certain of preserving all the changes to data in a controlled delivery. Achieving this requires that the database is designed in a particular way, and relies on the same discipline of insulating the actual database base tables from the application(s) behind a defined interface at the database level, in much the way you would between application modules. Base tables do not represent a satisfactory interface!
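As a sketch of the idea, with invented names, the applications might reference only views and procedures in an interface schema, never the base tables themselves:

    -- The interface lives in its own schema; the base tables stay private.
    CREATE SCHEMA Interface;
    GO
    -- A view presents the data in the shape the application expects, so
    -- the base tables beneath it can be refactored without breaking callers.
    CREATE VIEW Interface.CustomerList
    AS
        SELECT CustomerID, FirstName, LastName, Email
        FROM dbo.Customer;
    GO
    -- Modifications go through procedures for the same reason.
    CREATE PROCEDURE Interface.AddCustomer
        @FirstName nvarchar(50), @LastName nvarchar(50), @Email nvarchar(128)
    AS
        INSERT INTO dbo.Customer (FirstName, LastName, Email)
        VALUES (@FirstName, @LastName, @Email);
    GO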

There are development techniques, such as feature toggles and feature routers, that lend themselves to this type of Agile development, and some features of SQL Server are especially useful for implementing them, such as table-valued functions and synonyms. It is perfectly possible to expose different versions of the database to different users whilst keeping the underlying data up-to-date, and to change the version of the database exposed to a particular user merely by changing the user’s Windows group.
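Here is a minimal sketch of a synonym acting as a feature router, assuming versioned schemas v1 and v2 (and an App schema for the synonym) already exist, and that both versions read the same up-to-date base tables:

    -- Both versions of an inline table-valued function read the same data.
    CREATE FUNCTION v1.SalesSummary ()
    RETURNS TABLE
    AS RETURN
        SELECT CustomerID, Total = SUM(Amount)
        FROM dbo.SalesOrder
        GROUP BY CustomerID;
    GO
    -- The new version adds a column; callers of v1 are left untouched.
    CREATE FUNCTION v2.SalesSummary ()
    RETURNS TABLE
    AS RETURN
        SELECT CustomerID, Total = SUM(Amount), Orders = COUNT(*)
        FROM dbo.SalesOrder
        GROUP BY CustomerID;
    GO
    -- The application queries the synonym, so re-pointing it switches the
    -- version instantly, and rollback is just as quick.
    CREATE SYNONYM App.SalesSummary FOR v1.SalesSummary;
    -- To switch to v2 (or back): drop the synonym and re-create it
    -- pointing at v2.SalesSummary.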

This sort of release needs a lot of testing and defensive development. It is best to use feature-switching techniques that can be controlled simply by making changes to user access, so schema-based security is ideal. The downside is that schema-based access can require considerable re-engineering of an existing database. Feature-switching via code changes is ill-advised. The worst mistakes come from switching features via global toggle variables, especially if they are recycled.
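For example, here is a sketch of switching purely by access control, assuming hypothetical Windows groups for ordinary and pilot users (default schemas for Windows groups require SQL Server 2012 or later):

    -- Map hypothetical Windows groups onto database users whose default
    -- schema determines which version of the interface they see.
    CREATE LOGIN [OURDOMAIN\SalesUsers] FROM WINDOWS;
    CREATE LOGIN [OURDOMAIN\SalesPilotUsers] FROM WINDOWS;
    CREATE USER SalesUsers FOR LOGIN [OURDOMAIN\SalesUsers]
        WITH DEFAULT_SCHEMA = v1;
    CREATE USER SalesPilotUsers FOR LOGIN [OURDOMAIN\SalesPilotUsers]
        WITH DEFAULT_SCHEMA = v2;
    -- Each group may use only the version intended for it.
    GRANT SELECT, EXECUTE ON SCHEMA::v1 TO SalesUsers;
    GRANT SELECT, EXECUTE ON SCHEMA::v2 TO SalesPilotUsers;
    -- Moving a person between the Windows groups switches the features
    -- they see; rollback needs no code change or redeployment at all.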

Summary

There has always been a gap between good practice and organization, and the conventional structure of the average IT department. In terms of the lifecycle of the application and database, organizational convention often dictates a strange wall of non-communication, sometimes even non-cooperation, between the delivery activity and operations. This has never been sanctioned or condoned by any known methodology, and gets in the way of delivering updates to an application or database.

To be sure, a delivery team is focused on its project, whereas an operational team has a wide-ranging responsibility for the whole range of current applications, databases and processes, and the way they interact. These are very different perspectives. This means that cooperation and the exchange of ideas and techniques is valuable, but the two teams can only merge into a single team in the unusual case where an organization has just one application that requires operational support.

DevOps is an initiative that comes from frustration felt with an organizational system that has evolved in an adaptive way. It stems from individual professional people caught up in the resulting confusion. It makes it easier to introduce many of the techniques that underlie the best DLM practices, especially the delivery pipeline and database provisioning, and it focuses on part of the database lifecycle. It is not concerned with the entire sweep of DLM, and has little to say about governance or the operational support of databases that are no longer being actively developed in-house.

It has, however, been a very positive influence in making it possible to take part of the database lifecycle, the delivery pipeline, and encourage both good practices and new initiatives in reducing the time it takes to deliver change.
