Developing and deploying database changes can be a complex task, made more challenging by the fact that development teams need to move fast, while also protecting an organization’s crown jewels: its data. Speed of delivery and protecting data can often feel incompatible, but there are industry-proven database DevOps practices that bring them together in harmony.
Across each of these five key practices, there’s a common theme of removing barriers and cognitive load for teams; crucially, each also puts safeguards in place to reduce the risks to production environments.
1. Automate, automate, automate
Developing and deploying database changes can mean a lot of tedious, manual and time-consuming tasks that cut productivity and sap morale. Automation overcomes this; it enables teams to get more done, faster, by driving repeatable, predictable deployments. That reduces the risk of downtime and data loss, while improving delivery rates and fostering a happier team environment. Nobody likes surprises at deployment time…
So, what’s the one area of automation that is most impactful for deploying database changes? Continuous integration. It empowers development teams because they know that as code changes are committed, they are automatically validated for potential clashes. The impact of CI is significant, and as Grant Fritchey, Redgate Advocate and Microsoft MVP, urges:
“All I ask of you is to set up CI. Whatever code you’re working on, make the changes and check it into source control, and let CI do the rest.”
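As a minimal sketch of what that CI validation step might look like for database changes, the script below applies a set of migration scripts, in order, to a throwaway database. SQLite stands in for your real database server, and the script names and SQL are purely illustrative; a real pipeline would read them from your migrations folder in source control.

```python
# Hypothetical CI step: validate pending migration scripts against a scratch database.
import sqlite3

# Illustrative migrations; a real pipeline would load these from version control.
MIGRATIONS = [
    ("001_create_customers.sql",
     "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);"),
    ("002_add_email.sql",
     "ALTER TABLE customers ADD COLUMN email TEXT;"),
]

def validate_migrations(migrations):
    """Apply every migration, in order, to an in-memory database.

    Any syntax error or clash (e.g. a duplicate column) fails the build
    here, rather than surfacing at deployment time.
    """
    conn = sqlite3.connect(":memory:")
    try:
        for name, sql in migrations:
            try:
                conn.executescript(sql)
            except sqlite3.Error as exc:
                raise RuntimeError(f"Migration {name} failed: {exc}") from exc
    finally:
        conn.close()

if __name__ == "__main__":
    validate_migrations(MIGRATIONS)
    print("all migrations applied cleanly")
```

The point is not the tooling (commercial migration tools do this far more robustly) but the shape of the safeguard: every committed change is replayed automatically against a clean database before it goes anywhere near production.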
2. Shift-left testing to catch bugs earlier
The later data-related issues are caught in the development cycle, the more expensive they are to resolve. If data-related issues make it out to customers, they risk the company’s reputation and further slow the delivery of new features and updates to market.
“If developers run database tests as well as code quality and coding standards checks, before even committing the code to version control, then it prevents problems ever reaching the main branch of development.”
Time saved upfront has enormous productivity gains for teams – it avoids the waste of re-work and hours spent debugging and re-testing releases. Over time, it also contributes to the reduction of technical debt, yielding further gains for productivity and developer morale.
So where to start? At the heart of your shift-left strategy is the requirement to make your non-production environments, even your developer environments, as much like production as possible – but without compromising compliance. This means your tests, even before they are committed, are run in a more accurate environment against life-like data sets. This also means that as your database changes are finally deployed to production, you’ve got a greater chance of deploying successfully, without any outages.
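To make the idea concrete, here is a hedged sketch of the kind of database test a developer might run before committing, again using SQLite purely for illustration; the table and business rule are hypothetical:

```python
# A minimal shift-left database test: verify a schema constraint before commit.
import sqlite3

# Hypothetical schema fragment under test.
SCHEMA = """
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    quantity INTEGER NOT NULL CHECK (quantity > 0)
);
"""

def test_rejects_non_positive_quantity():
    """Return True if the schema rejects invalid rows, False otherwise."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(SCHEMA)
        conn.execute("INSERT INTO orders (quantity) VALUES (3)")  # valid row
        try:
            conn.execute("INSERT INTO orders (quantity) VALUES (0)")  # must fail
            return False
        except sqlite3.IntegrityError:
            return True
    finally:
        conn.close()

if __name__ == "__main__":
    assert test_rejects_non_positive_quantity()
    print("schema checks pass")
```

A test like this takes milliseconds to run locally, which is exactly why it belongs before the commit rather than after the deployment.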
How do you make sure your development teams have access to production-like data, when they need it? Take a look at practice #3.
3. Empowering your development teams with the right test data at the right time
Making the right test data available at the right time is especially important for teams who need to run integration, unit or performance tests that require several copies of the data, or where a succession of tests needs to be run that affects the data or schema. These sorts of tests, which can be time-intensive to plan and set up, are so much more achievable when teams have high quality test data on demand.
But what about compliance? Having the right test data to hand also means ensuring that it has been correctly sanitized to avoid the risk of exposing sensitive data. The optimal approach reconciles on-demand availability of test data with the need to protect data privacy. As James Murtagh, Redgate product manager, explains:
“One solution is to combine data masking and data virtualization so that development environments can be refreshed with a trustworthy copy of the latest production dataset, void of any PII, quickly. These lightweight copies, or clones, also ensure efficient use of disk space, and mean that test data is delivered reliably in the same way, every time.”
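The masking half of that combination can be sketched in a few lines. The example below deterministically pseudonymizes PII columns before rows are copied to a dev environment; the column names, salt and token format are all hypothetical, and production-grade masking tools handle far more cases (referential integrity across databases, format-preserving values, and so on):

```python
# Hedged sketch: deterministic pseudonymization of PII columns
# before a dataset is refreshed into a development environment.
import hashlib

# Hypothetical list of columns considered sensitive.
PII_COLUMNS = {"name", "email"}

def mask_value(value, salt="dev-refresh"):
    """Replace a sensitive value with a stable, irreversible token.

    Deterministic hashing means joins on masked columns still line up
    across tables, but no real PII leaves production.
    """
    digest = hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()
    return f"masked_{digest[:12]}"

def mask_rows(rows, columns):
    """Return rows as dicts, with values in PII columns replaced."""
    return [
        {col: mask_value(val) if col in PII_COLUMNS else val
         for col, val in zip(columns, row)}
        for row in rows
    ]
```

For example, `mask_rows([("Ada", "ada@example.com", 42)], ["name", "email", "credit_limit"])` would keep the credit limit intact while replacing the name and email with opaque tokens.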
Empowering teams with the right test data at the right time not only yields higher-quality releases, it also gives teams the confidence to develop and deploy changes more often. And the more confident teams feel about releasing more often, the more innovative their changes can be. Enter practice #4 as a mechanism for driving experimentation and innovation in a safe environment.
4. Ephemeral Dev and Test environments
Organizations are increasingly seeing the benefits of ephemeral or ‘disposable’ environments for driving more agile development and testing approaches. The problem with static, shared database environments is that they’re expensive to maintain, are often underutilized and contain out-of-date data. Temporary, isolated environments, on the other hand, make efficient use of resources, and provide fast, self-service access to the latest data. They enable development and testing to be done without worrying about impacting other team members or production systems. This allows for frequent, rapid cycles of database development and testing. For development teams who need to develop and test database changes, there are four benefits that raise team productivity and the appetite for innovation:
- Standardization: teams work with standardized environments for greater reliability and security.
- Isolation: team members can spin up a copy just for themselves, meaning no more tripping over other team members or causing conflicts with other environments.
- Cost-effective: environments are created when needed and destroyed when not in use, making efficient use of resources and reducing the cost of underutilized servers.
- Scalable: teams can quickly scale up to meet testing demands while reducing operational costs by automatically destroying environments when they’re finished.
As Grant Fritchey says:
“The whole goal here is to move fast, but also allow the development teams to safely experiment and break stuff in a place where it doesn’t affect Production.”
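The lifecycle of an ephemeral environment, create on demand, use in isolation, destroy automatically, can be sketched as a context manager. Here a temporary SQLite file stands in for a cloned copy of a real database server, so the names and seeding are illustrative only:

```python
# Sketch of a self-service ephemeral test database: created on demand,
# seeded, and destroyed automatically when the test finishes.
import os
import sqlite3
import tempfile
from contextlib import contextmanager

@contextmanager
def ephemeral_db(seed_sql):
    """Yield a connection to a throwaway database that is deleted on exit.

    Each caller gets an isolated copy, so tests never trip over other
    team members or touch shared and production environments.
    """
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    conn = sqlite3.connect(path)
    try:
        conn.executescript(seed_sql)
        yield conn
    finally:
        conn.close()
        os.remove(path)  # the environment is disposed of, freeing resources

if __name__ == "__main__":
    with ephemeral_db("CREATE TABLE t (x INTEGER); INSERT INTO t VALUES (1);") as conn:
        print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])
```

The same pattern scales up with data virtualization: instead of a file, the context manager would request a lightweight clone and drop it on exit, giving each tester their own disposable copy of production-like data.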
Automation enables a far more agile and flexible approach to provisioning test instances, and frees DevOps teams from manual processes that are a drag on innovation. In the same way, automation can unlock time previously spent on other repetitive tasks such as documentation. Find out more in practice #5.
5. Ease the burden of documentation with automation
Preparing documentation, whether for knowledge sharing, onboarding, or for compliance purposes, is typically time-consuming and laborious for teams. But automation can ease much of that burden because, by its very nature, it requires certain standards and processes; and this means that documentation and audit trails are created. Every piece of code you write goes into source control and, if you commit the change, it triggers a CI build which is run independently of your development team. As builds pass through QA, Staging and then finally to Production environments with the help of CI, audit trails are created. As Grant Fritchey says:
“You can hand these to an auditor and say this is how changes get to production, here are the protections and security mechanisms around those changes, and here are the tests that validate them.”
By creating an automated process, documentation becomes significantly less time-consuming and error-prone. Even the simple act of committing database code to version control initiates an audit trail of who made the change, when and why. And a well-documented change management process benefits not just your audit requirements, but also development and QA teams, even the wider business. It’s an easy win that can be gained as a by-product of standardized development processes, version control and continuous integration.
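As a rough illustration of that by-product, the sketch below records an audit entry each time a change is applied. The field names are hypothetical rather than any specific tool’s format; in practice this information comes for free from version control history and CI logs:

```python
# Illustrative sketch: an audit trail written as a by-product of
# applying a database change (who, what, when, why).
import json
from datetime import datetime, timezone

def record_change(trail, author, script, reason):
    """Append a structured who/what/when/why entry to the audit trail.

    The list stands in for the log an auditor could review to see how
    changes reached production.
    """
    trail.append({
        "author": author,
        "script": script,
        "reason": reason,
        "applied_at": datetime.now(timezone.utc).isoformat(),
    })
    return trail

if __name__ == "__main__":
    trail = record_change([], "a.developer", "002_add_email.sql",
                          "Ticket ABC-123: store customer contact email")
    print(json.dumps(trail, indent=2))
```

Because the entry is generated by the pipeline rather than written by hand, it is never skipped, never stale, and costs the team nothing extra to produce.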