As a DevOps Advocate at Redgate, I frequently work with Enterprise customers to help them transform their software development process for databases across their entire organization.
I don’t do this work alone: Redgate’s development teams have a strong customer focus. Engineers working on our DevOps solutions frequently join the Sales teams and the Advocate team on customer calls. This enables us all to understand the friction points that slow down or stall adoption in the real world, and to innovate changes in our solutions to fit the most effective workflows for our customers.
Here are the top problems I’ve found in Enterprises implementing database DevOps, along with the solutions which speed up adoption.
Problem 1: Managing the flow of releases is difficult when some changes take significant time to test, but production bugs must be fixed quickly
Database development and delivery works best when teams follow a process in which:
- Deployments contain only a small set of changes
- Changes are designed to be backwards compatible
- Changes minimize impact to the user and may be deployed with systems online
- Deployments occur frequently
- Lead time between development and deployment is as short as possible
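The backwards-compatibility principle above is commonly implemented with an expand/contract (parallel change) pattern: add the new structure alongside the old, backfill, and only later remove the old structure. Here is a minimal, hypothetical sketch using SQLite; the table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, fullname TEXT)")
conn.execute("INSERT INTO users (fullname) VALUES ('Ada Lovelace')")

# Expand: add the new column as nullable, so application code that
# predates this change keeps working against the same table
conn.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

# Backfill in the same deployment or a later one
conn.execute("UPDATE users SET display_name = fullname WHERE display_name IS NULL")
conn.commit()

# Old readers still see 'fullname'; new readers can use 'display_name'.
# The 'contract' step (dropping 'fullname') waits until no reader needs it.
row = conn.execute("SELECT fullname, display_name FROM users").fetchone()
print(row)  # ('Ada Lovelace', 'Ada Lovelace')
```

Because each step is deployable while systems stay online, this pattern keeps individual deployments small and low-impact.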
While this is the ideal process, it is sometimes impossible for a group to shift to this pattern for existing databases in an Enterprise environment. This may be related to multiple teams with different priorities and timelines working on the same codebase, database schemas containing complexities that make it difficult to decompose changes, requirements for grouped customer acceptance testing of features prior to deployment, limited deployment windows, and other real-world factors.
Although these teams can’t follow ideal patterns right away, it is still possible for them to implement DevOps processes that reduce toil, improve code quality, and help them continuously improve the way they deliver value to customers.
Solution 1: Enable on-the-fly provisioning of databases for development and test
In working with Enterprises, we have found that a key to helping customers who must handle some slow and/or complex deployments is functionality to reset and redeploy realistic development and test environments quickly. This is done by integrating centralized images and data virtualization (aka clones of databases) into the development process.
This capability allows customers to:
- Immediately create realistic development and test environments in the appropriate state to validate an urgent hotfix
- Quickly reset QA and customer acceptance environments and redeploy changes to them when a hotfix or another release “jumps ahead in line”
- Act with speed even when large databases are in use
- Ensure that sensitive information is de-identified or removed from datasets before they are used in development
Related research: the 2019 Accelerate State of DevOps Report from Google Cloud found that 72% of “elite” performers automate provisioning and deployment to test environments, compared to only 39% of “low” performers.
Our experience in working with Enterprise customers around the world has led Redgate to provide multiple new enhancements for database provisioning in database development, including the ability to use virtual databases as “baselines” and to quickly and easily create virtualized databases for different branches.
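To make the reset-and-redeploy idea concrete, here is a rough sketch of the workflow. This is not Redgate’s implementation (which virtualizes data at the storage layer so that even large databases provision quickly); it uses plain SQLite file copies as a stand-in for centralized images and clones, and all file, table, and branch names are hypothetical:

```python
import shutil
import sqlite3
import tempfile
from pathlib import Path

WORKDIR = Path(tempfile.mkdtemp())
BASELINE = WORKDIR / "baseline.db"  # stand-in for a centralized image

def build_baseline() -> None:
    """Capture the baseline 'image' once; real tooling would build this
    from a production-realistic, de-identified dataset."""
    conn = sqlite3.connect(BASELINE)
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
    conn.execute("INSERT INTO customers (name) VALUES ('Ada'), ('Grace')")
    conn.commit()
    conn.close()

def provision(branch: str) -> Path:
    """Provision a per-branch environment from the baseline
    (a file copy here, a lightweight clone in real tooling)."""
    env = WORKDIR / f"env_{branch}.db"
    shutil.copyfile(BASELINE, env)
    return env

def reset(branch: str) -> Path:
    """Resetting an environment is just re-provisioning from the baseline."""
    return provision(branch)

build_baseline()
db = provision("hotfix-123")

conn = sqlite3.connect(db)
conn.execute("DELETE FROM customers")  # a destructive test run dirties the environment
conn.commit()
conn.close()

db = reset("hotfix-123")  # back to the baseline state in one step
conn = sqlite3.connect(db)
count = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
print(count)  # 2
conn.close()
```

The key property is that resetting is as cheap as provisioning, which is what lets QA and acceptance environments be torn down and rebuilt whenever a hotfix jumps the queue.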
Problem 2: Teams are hesitant to change processes related to legacy monoliths, but active development must continue on these databases
One of the greatest sources of friction in an Enterprise is fear of, and resistance to, change. I have found that this fear tends to be highest regarding legacy monolith databases.
Many Enterprises have initiatives underway to reduce their dependency on legacy monolith databases. Some have taken on microservices initiatives, while some have more modest efforts to split out functionality into datastores with simpler dependencies. In either case, these changes can’t be completed overnight, and active development against the legacy monolith database must continue for some time into the future.
Fears around legacy monolith databases are often brought up early in DevOps initiatives: many assume that they will never be able to successfully implement DevOps processes with these databases and should not even try.
Solution 2: Design processes that work for both legacy monolith and greenfield databases
DevOps is not simply about speeding up development — DevOps is about increasing the flow of value. If your environment includes active development on a legacy monolith database, it’s critical to include that in the scope of your project. In other words, this is an area where you should invest in high quality changes.
The greatest problem in this area is fear. To help assuage these fears, Enterprises should:
- Identify a relatively low-risk database (perhaps a legacy database, perhaps a greenfield database) for initial proof of concept (POC) and implementation
- Involve representatives from teams who work on legacy monoliths as consultants in the POC
- Design workflows which will be extendable for the legacy database
This will likely prove less challenging than it initially seems to team members.
We have found that the same patterns which make teams successful in applying DevOps to greenfield databases also improve quality and collaboration for legacy monolith databases. These patterns include an early review of code changes via Pull Request workflows, the development of database unit tests for critical requirements and checks, and the use of automation to bring in database administrators for review on critical changes.
However, working through these patterns in pipeline development and including those who work with the legacy monolith databases as consultants is key: it’s much easier to be confident about and become invested in a new process when one participates in building it.
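As one hedged illustration of the database unit test pattern, the sketch below encodes two critical requirements as tests against an in-memory SQLite database; the schema, table, and constraint choices are invented for the example, and a real pipeline would run tests like these against the team’s own database platform:

```python
import sqlite3
import unittest

# A critical requirement expressed in the schema itself:
# every order must name a customer and have a non-negative total.
SCHEMA = """
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL,
    total REAL NOT NULL CHECK (total >= 0)
);
"""

class CriticalSchemaChecks(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.executescript(SCHEMA)

    def tearDown(self):
        self.conn.close()

    def test_customer_id_is_required(self):
        # The database should reject orders with no customer
        with self.assertRaises(sqlite3.IntegrityError):
            self.conn.execute("INSERT INTO orders (total) VALUES (10.0)")

    def test_negative_totals_are_rejected(self):
        # The CHECK constraint should reject negative totals
        with self.assertRaises(sqlite3.IntegrityError):
            self.conn.execute("INSERT INTO orders (customer_id, total) VALUES (1, -5.0)")

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(CriticalSchemaChecks)
    unittest.TextTestRunner().run(suite)
```

Tests of this shape run identically against a greenfield schema and a legacy monolith, which is why the same pipeline can serve both.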
An additional secret to fast adoption: the right kind of DevOps team
Some folks have a bit of a bias against centralized “DevOps Teams” in Enterprises. I believe this comes from some examples where companies mistakenly created teams to “perform” DevOps for everyone — which simply creates a new silo and bottleneck.
I have found that DevOps Teams can be incredibly successful at enabling transformation in an Enterprise, as long as the DevOps team has the mission to act as consultants who empower other teams. An effective DevOps team:
- Acts as a coach and a guide for engineers
- Advises on Proof of Concept exercises
- Identifies Agents of Change in teams around the organization
- Shares guidance and recommendations about tooling and workflows which have been established inside the company
- Helps connect teams with one another to share their experience and expertise
- Helps explore new technology and workflows which may be useful to teams around the Enterprise
- Partners with vendors to help teams get the most out of their licensing and all the resources available to them
I’m happy to say that I’ve had wonderful experiences working with contacts in DevOps teams at Enterprises around the world, and that the people in these teams tend to be smart, engaged, effective, and also fun to work with. DevOps teams truly can function as a positive catalyst within an Enterprise, saving time and money for all of the teams who they advise.