To many people, Database DevOps means automating the processes involved in the provisioning, development and release of databases. This focus on automation is important, of course, because it allows us to deliver releases to our users reliably and frequently. Automation becomes more and more important as the number of releases increases, because it speeds up the processes, removing the delays inherent in manual work, and it also makes them more reliable. We minimize the risk of human errors that can delay the release process, and we can also run complex jobs overnight, outside working hours. It liberates team members from routine chores.
Of course, there is more to DevOps than automation. It’s about helping your technology teams, wherever they are in the business, work more collaboratively towards a shared goal. It’s about building quality in, having smaller, more frequent releases, and getting feedback fast. It’s about experimenting, shifting left to find problems early, and providing value to your users. It is about building systems that are intrinsically secure, and being responsible about data privacy. It is also about making processes as visible across teams as possible, to make cooperation easier and to ensure that there are as few surprises as possible.
So, what is Redgate’s end-to-end Database DevOps solution?
Redgate tools make it easy for you to include your database in your DevOps processes, so that development, versioning, release, and ongoing monitoring all accommodate the special requirements of the database. We have tools to aid collaboration, improve productivity, and make it easier to get feedback about your application once it’s running in production. We also have tools that will mask and pseudonymize any sensitive or personal data before it leaves the production environment to be used in development, testing or analysis, because any data provisioning process must comply with the relevant legislation that protects sensitive data.
Standardized Team-Based Development
Increase productivity when working with databases
SQL Prompt is one of my favorite tools. It’s best known for its advanced IntelliSense-style code completion features, but there is a lot more to it than that. Did you know, for example, that it also has SSMS tab management features that will ensure you never accidentally lose code or execute code on the wrong server, and that it will warn you if you’re about to execute a `DELETE` without a `WHERE` clause? These can be life-savers, but the features I rely on the most are those that allow me, and the team, to be more productive, and to introduce standards and consistency into our code. Here are the main reasons why I like it so much, from a DevOps perspective:
- SQL Code Completion and IntelliSense – SQL Prompt leads me interactively through the syntax of SQL commands and queries, as I type. It will auto-complete my `SELECT` clauses with schema-qualified table names. It will perform wildcard expansion for `SELECT *` queries, and it understands `FOREIGN KEY` relationships and suggests `JOIN` conditions automatically. It will also help fill in the `GROUP BY` clauses, and more. Without Prompt, writing SQL just takes a lot longer, and I make more mistakes.
- SQL Code Snippets – Prompt snippets perform a similar function to SSMS templates, but are easier to use and much more versatile, since they provide replacement points for various parameters. Prompt has a built-in library of snippets that will save you a lot of time with simple but very common operations, such as typing a `BEGIN`/`END` block (be) or a `TRANSACTION` (bt), or when creating a new procedure (cp) or altering one (ap). Teams can also create custom snippets for frequently used code, such as the code to build or alter a table, allowing them to introduce standards and consistency to the way they build tables and modules. They can also develop and share standardized snippets for testing code or investigating slow-running code; the snippet to retrieve query plans and execution statistics from Extended Events is a great example.
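To give a flavor of what a shared custom snippet might contain (a sketch, not one of Prompt’s built-in snippets; the snippet name and placeholder names here are hypothetical), a team’s standard table-creation snippet could encode its naming and key conventions:

```sql
-- Hypothetical custom snippet "ct" (create table), using
-- Prompt-style $placeholders$ that are filled in on insertion.
CREATE TABLE $schema$.$table$
    (
    $table$ID int IDENTITY(1, 1) NOT NULL
        CONSTRAINT PK_$table$ PRIMARY KEY,
    $CURSOR$
    );
```

Because everyone inserts new tables from the same snippet, the team standard (an identity key with a named primary key constraint, in this sketch) is applied automatically, rather than enforced after the fact in code review.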
- SQL Code Analysis – Prompt’s static code analysis rules catch problems while you type. This includes making sure that you’re following best practices and naming conventions, not using deprecated syntax, and more. This minimizes the number of ‘code smells’ that creep into the application and database builds, and so improves build reliability.
- SQL Code Formatting and Styles – SQL Prompt code styles allow me to format SQL to my own, or to my team’s, standards in a flash. This standardized formatting for code in source control makes code reviews much more efficient, and troubleshooting of someone else’s T-SQL scripts faster and easier.
SQL Search
This is a great, and free, little tool. It allows you to find text strings within SQL code, or object names, in your database, or across several databases. You can limit your results to certain objects, such as tables, views, stored procedures, or a combination of these. It will help you understand dependencies, or search for comments. My favorite part is that once you find what you’re looking for, it’s super easy to jump to that object in the Object Explorer.
SQL Dependency Tracker
SQL Dependency Tracker lets you create a diagram that visualizes the dependencies within a database. If you are making changes to a SQL routine, for example, it is important to know what database objects reference it, and you can check this very quickly. This is also a great way to quickly understand the impact of changes.
Automated Database Provisioning
Database Provisioning is a common chore in database development work. Every development initiative needs, for example, to keep a test cell maintained with the current working build. To do this, there must be a fast, reliable, secure and automated way to provision non-production SQL Server instances with a copy of the current version of the database (or indeed any version), and then fill it with the data needed for development and testing work.
There are many ways to do this, depending on your requirements. You could simply run the latest database build script, from source control, and then load standard data sets for testing, using scripts or a tool like SQL Data Generator.
However, if the database is large, and the team need to test with realistic data, in realistic volumes, then to recreate it on demand, in seconds rather than minutes, requires a different approach, such as SQL Clone, or SQL Provision.
SQL Provision integrates SQL Clone and Data Masker for SQL Server, to provide fast and safe distribution of database copies from one central location, to many non-production environments, without blocking the team’s development and release processes.
SQL Clone allows us to create copies of even terabyte-sized SQL Server databases in seconds, using a fraction of disk space. We all know about how much more data we’re processing and storing, so this alone can represent a substantial cost saving.
SQL Clone builds one full copy of the source database, called an image. From that image, developers can create multiple clones very quickly. You can think of a clone as a virtualized database copy that functions just like a normal database.
This is great if you want to create a copy of the database for each developer, so they have their own dedicated environment. We call this their sandbox. This allows them to make changes, take risks, make mistakes, and not impact anyone else on the team. It also saves developers from overwriting each other’s work and potentially losing work in development.
Another great use for SQL Clone is to spin up multiple environments automatically during automated test runs. This can really shorten feedback loops, because you can parallelize test runs and find issues faster. Certain types of database testing, such as unit testing, are well suited to the use of small, standard data sets. However, the QA team will struggle to do effective testing if data is old, or doesn’t accurately reflect the characteristics and anomalies present in production data. Likewise, use of realistic data during integration tests will help the team ‘shift left’, find data compatibility issues earlier in the release pipeline, and reduce the occurrence of data-related bugs.
I’ve worked with teams that deploy their software to customers’ sites and have to support SQL Servers on different versions. So, using SQL Clone, you can spin up multiple copies of the database without worrying about disk space to get more thorough testing across the SQL Server versions you support.
Data Masker allows the team to devise a repeatable, automated process that ensures the data is masked and obfuscated before it leaves the secure environment of Production. This, in conjunction with encryption techniques, will ensure that the data is always secure, and can’t be used to identify the data subject. I’m sure this concerns many of you, because it relates to GDPR, HIPAA, and SOX.
SQL Data Generator
This is a simple, though versatile, way of filling your database with realistic data. It will allow you to produce a pseudonymized copy for development work. You can even automatically build-and-fill multiple development databases using SQL Data Generator and PowerShell.
You can do many things that are impossible with Data Masker, such as generating bigger versions of the same database, so you can test for scalability. It is also essential when you’re testing a database before it is ever released, and so has no data.
Source control systems ensure that the team always have a record of who changed what, when, and hopefully why (provided the person that made the change enters a good comment; if not, maybe you can track them down). I love that this can be tied back to work items, user stories, or bug IDs, so that you can easily trace why the change was made.
SQL Source Control helps you get your database changes into source control, and then follow a best practice to deploy from source control. As Grant Fritchey says, “if it’s not in source control, then it doesn’t exist“, which means it won’t get deployed to other environments. This makes it much easier for database developers to adopt the good habit of committing their changes to version control system as soon as they’re ready. This can trigger an automated database build process (see later) that verifies that no change has ‘broken’ the build, and that the team’s suite of unit tests continues to pass. You can also tie this into a code review or Pull Request review process so you’re building quality in and shifting left to catch problems early.
Redgate currently offers two options for getting your database schema and static data into source control. Both will let you see how each database object changes over time. They integrate with the most popular of the existing source control systems, which is nice because your organization is probably already using one for its application development, such as Git, Team Foundation Server, Azure DevOps Services (previously Visual Studio Team Services) or Subversion.
SQL Compare and SQL Data Compare (Provisioning)
Although SQL Compare and SQL Data Compare are usually considered as ways of determining the differences between two databases, they can also generate scripts for synchronizing them. If you compare with an empty database, SQL Compare will, of course, generate the script to build the entire database. Moreover, SQL Compare considers source control, or a script directory, to be a database, so it can easily generate a build script or migration script by comparing a script directory with an empty database.
SQL Source Control will allow you to do a build, but if you use SQL Compare and SQL Data Compare alongside SQL Source Control then you can create migration scripts between versions, which will allow you to revert your changes in development. You can also generate scripts that can be used as a great starting point for a rollback script that rolls back your changes in Production, if required. I say, ‘starting point’, because you want to be careful about rolling back changes in Production to make sure there’s no data loss. It is also important to understand what the impact to your system would be. Depending on the problem and your situation, you may need to consider other options, like restoring a backup, rolling forward, or maybe even taking some down time in the worst-case scenarios.
Finally, another thing to consider is server or database configuration changes. Luckily, I don’t think these changes happen that often, and they may be controlled more by DBAs than developers. I highly recommend that these database changes be scripted and stored in your source control system. That way, they can also be part of your release pipeline and help to keep your environments consistent, to avoid problems. Server or SQL Agent settings should be stored with the server details in your configuration management system. These are also used with SQL Clone to customize a cloned database.
Automating Database Deployments
The toolbelt provides several tools that will help you validate and improve the quality of your database builds, as well as run automated tests and get fast feedback about integration problems, while they are fresh in your mind and so easier to fix. It also helps the team move to smaller release chunks.
A great way to improve database code quality, during software development, is to run unit tests in conjunction with a code coverage tool. The aim is not only to test that your database behaves as you would expect, but also that your suite of tests explores all possible use cases and workflows for your code.
SQL Test is an add-in for SQL Server Management Studio that you can use to create and run tSQLt unit tests against your database code. These database unit tests allow you to check table structure and views, to check that the right data is inserted/updated/deleted when calling stored procedures, and that functions work properly. You could even create tests to check permissions. It’s also important to check any business logic that might be written in your stored procedures.
Each team member can use SQL Test in their sandbox, to create and run the unit tests that prove that each individual component that they build or alter works as expected. The team can then repeat the entire suite of unit tests, plus others, as part of your Continuous Integration process (covered shortly), to make sure that a change to one component doesn’t cause others to fail. You can use it to ensure you’re not causing any regressions with future releases.
tSQLt is a powerful, open-source T-SQL testing framework. It provides test setup and teardown processes, which you can use to create, and then clean up, test data. You can also fake tables to perform tests, and then get the data back to how it was before the test started. It’s also super easy to create test suites, which allow you to organize your tests into different groups. There may be some tests that you want to run on every build, and others that you might want to run nightly. Or, you might want to run different test sets on the different database clones (discussed earlier), to help you parallelize your tests so you get results faster.
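To make this concrete (a sketch; the `dbo.Orders` table and `dbo.AddOrder` procedure are hypothetical), a tSQLt unit test typically fakes the table under test, calls the code, and asserts on the result:

```sql
-- Create a test class (a schema that groups related tests).
EXEC tSQLt.NewTestClass 'OrderTests';
GO
-- Hypothetical test: AddOrder should insert exactly one row.
CREATE PROCEDURE OrderTests.[test AddOrder inserts a row]
AS
BEGIN
    -- Replace dbo.Orders with an empty, constraint-free copy,
    -- so the test is isolated from any existing data.
    EXEC tSQLt.FakeTable 'dbo.Orders';

    EXEC dbo.AddOrder @CustomerID = 1, @Quantity = 5;

    DECLARE @RowCount int = (SELECT COUNT(*) FROM dbo.Orders);
    EXEC tSQLt.AssertEquals @Expected = 1, @Actual = @RowCount;
END;
GO
-- Run every test in the class:
EXEC tSQLt.Run 'OrderTests';
```

Because `FakeTable` swaps the real table out for the duration of the test, the test is repeatable and leaves the development database exactly as it found it.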
SQL Test also incorporates SQL Cover, an open source SQL code coverage library, written by Ed Elliott, another Friend of Redgate. SQL Code Coverage results can also be seen in SSMS before committing changes to source control. This article describes the basics of how SQL Test’s code coverage can help uncover the darker, untested paths in your database code.
SQL Change Automation
SQL Change Automation (SCA) provides a consistent and repeatable way of automating as much as possible of the database build, test, integration and deployment and release processes.
SCA can be used as a standalone set of PowerShell Cmdlets from the command-line interface, or as a plug-in to Visual Studio. It also has plug-ins for all the major build and CI servers, including Azure DevOps Services, TeamCity and Jenkins, as well as release management tools like Octopus Deploy.
The DatabaseBuild components of SCA take as their input your database, as it’s represented in source control, validate that it is possible to build it without errors, and produce the build artifact, a NuGet package, containing the validated database source. We can also document the validated database, using the New-DatabaseDocumentation cmdlet (which uses SQL Doc under the covers), as well as rerun tSQLt tests at build time.
The team can also run their set of code analysis rules as part of the build, using SQL CodeGuard, to fail the build and make sure issues are fixed (or at least reviewed) before they make it into your Production environment. This is a very exciting new area for us and again something that people have been doing with application code for years.
You can create your code analysis policy (i.e. the configuration of code analysis rules for which you wish to check) in SQL Prompt, and then at build-time run the same set of rules automatically using the SQL CodeGuard command-line.
SCA keeps all the components of the build together as files within an immutable zipped directory of a standard format, called a NuGet package. A tool such as Octopus Deploy can deploy the NuGet package across a network, to update any target database so that it matches the version described in the package.
Instead of scheduling the build to run at regular intervals, if a change has been made, such as an overnight build, we can build on every commit. When we do this, using SCA’s CI plugins to set up a database CI process, the automated build is triggered, validated and tested every time a database change is committed to source control. This quick feedback helps the team find problems quickly, and get them fixed quickly too. It also allows you to build up a suite of unit tests and regression tests that will give you more confidence in your releases, which is important when your team is releasing often.
To keep CI builds fast, build up a good suite of unit tests and smoke tests for your database, which give good coverage. If any test fails, it fails the build, and CI systems are set up to notify the team, or show the failure on dashboards, so everyone can address the problem quickly, while the changes they made are small and still fresh in their minds. Once you’ve completed unit testing, it’s easy to exclude the tSQLt schema from your build artifact, so that these extra objects in the database do not get released to downstream environments in your pipeline. You will then do your regression and integration tests, to make sure that the database system as a whole works together.
You will also want to schedule additional, nightly builds that run more thorough integration tests. These check that all the components work together to perform business processes, and so take a bit longer to run.
Deploying the validated build
Using SCA, the validated build artifact can be deployed consistently to the other downstream database environments defined in your pipeline, such as Test, UAT, Staging, security and compliance checks, and ultimately Production. The whole process can be logged for traceability and auditability.
SCA will take the validated (and hopefully tested) build artifact and use it for deployment. Once created, the build artifact can be used repeatedly, but it cannot be changed. The whole idea is to build once, and then use this artifact to release the build to other environments, by synchronization. This provides consistency as you deploy to your other downstream environments and makes sure that nothing is accidentally forgotten.
Synchronizing an existing target database
When you need to update an existing database, SQL Change Automation allows you to test the upgrade of that database, from one version to another, whilst preserving the data held within it, rather than just building it from scratch. The tests run on the database once the synchronization script has been applied. This is a sensible way of testing the deployment script that will update the target databases from one known version to another.
When working with SQL Source Control, which uses a state-based approach, SCA will generate a deployment script from the object-level source that builds the database to a specific version. Once it is tested, this script is used for the release. Once it’s generated, tested and approved by the database developers, it is immutable. This means that anyone reviewing the script will be looking at the exact script that will run on the target.
A nice feature of SQL Change Automation (SCA) is that it can check whether the target database has drifted from what you expected the upgrade script to run against. If it has, the release can be stopped, so the team can get involved in understanding the differences and deciding whether it’s still safe to run. It’s probably worth doing another dry run at this point, which should be easy with your automated process in place.
With the migrations-first approach, such as when working with the SCA plug-in for Visual Studio, the script is generated, updated if needed, and reviewed at development time. This gives the developer a lot more control over the update script, to ensure that it can handle complex database refactoring and updates to transactional data without destroying relational integrity. This is the script that is committed into source control and used throughout the entire pipeline, so it has been rigorously tested along the way and is completely repeatable.
One of the good things about DevOps, is that as you start to release more frequently, releases don’t become this big scary thing anymore. Since it’s automated, the risk of manual error is removed. Also, since you are releasing more often, the changes are usually smaller and if something does go wrong, it might be easier to determine what happened.
Your database releases can move through the pipeline fully automated, as the checks and tests for each stage pass, or you can set up a manual review step, to have someone, or several people, check and approve the release before it moves on to the next deployment environment. Release management systems provide this functionality, and you can specify who needs to approve. It could be a specific person, several people, or even a certain number of people from a group. In DevOps, we talk about shipping value to our users, so we need to build this quality in and maintain our users’ trust.
Our tools integrate into existing Release Management systems such as Azure DevOps Services/Team Foundation Server (TFS), Octopus Deploy, and more. These tools allow the reviewer to see what the changes are, if there are any warnings about the deployment, and what the actual SQL script is that will run on the target database. It might also be worth looking at how long the release took in a previous environment, especially if that environment is similar, so the reviewer understands how long the release might take to run.
I highly recommend including in your pipeline a Staging or Pre-Production database that is as close as possible to Production. It could be a locked-down environment that restores a copy of the most recent backup, or a masked clone of production, using SQL Provision. This is the best way to test your deployment script before you go to Production, helping you to shift left and find any issues before they impact your users.
Release Management systems also allow you to schedule a release. So, you could manually review and make sure everything is ok in a Staging or Pre-Production environment during your normal working hours and then schedule for the release to Production to happen overnight when most users are off the system, and more importantly, you can be sleeping. Of course, make sure alerts are set up so that someone gets notified if anything goes wrong.
SQL Compare and SQL Data Compare (Deployments)
SQL Compare and SQL Data Compare are general-purpose database comparison and synchronization tools that can be used for some deployment tasks. They can be used to understand the differences between databases in different environments, or between source control and a given environment. They can also be used to deploy schema and static (reference) data from source control through your environments. As discussed earlier, it’s a best practice to always deploy from source control.
SQL Compare will raise warnings if a synchronization script it generates could cause data loss, and the team will need to design how to modify the script to avoid this.
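A classic example (a sketch, with hypothetical table and column names) is a column rename: a naive synchronization script drops the old column and creates the new one, losing the data, so the team would amend it to preserve the values:

```sql
-- Naive sync script: would silently discard every phone number.
--   ALTER TABLE dbo.Customers DROP COLUMN Phone;
--   ALTER TABLE dbo.Customers ADD PhoneNumber varchar(20) NULL;

-- Amended, data-preserving version of the same change:
ALTER TABLE dbo.Customers ADD PhoneNumber varchar(20) NULL;
GO
UPDATE dbo.Customers
SET    PhoneNumber = Phone;       -- copy the existing values across
GO
ALTER TABLE dbo.Customers DROP COLUMN Phone;
```

(A simple rename could also use `sp_rename`, but the add-copy-drop pattern generalizes to changes of data type, or splitting a column.)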
If you ever find yourself making a hot fix directly to Production, make sure that you incorporate this change back into your source control system, so it’s not overwritten the next time you do a deployment. It’s not good to re-introduce a bug you or someone else just fixed, especially if it was a hot fix on Production. That means it was probably bad and you wouldn’t want to regress back to that situation.
DLM Dashboard, another tool from Redgate, can also help with this and we talk about this below.
SQL Data Generator (Performance and Load Testing)
Here, as part of the test phase of the CI process, we are testing the performance of the build, and how well it scales with load, so that you don’t find problems later in Production or at peak times. This includes running with different data loads to check on scalability.
We can use SQL Data Generator to generate large, realistic data sets for your tests. This is great for greenfield (brand new) systems, where your database is virtually empty to begin with, but it is handy for any database development, because the data is all fake and requires no special security regime. You can specify how many records the different tables will have, and generate a good set of data to test against. It is very easy to store several data sets of various sizes, to allow you to compare the performance of different builds with the same data set. This will alert you to any slow-running queries introduced into the build.
SQL Doc
Here, we are documenting the current state of the database, as represented by the build. We can use SQL Doc to generate database documentation automatically with every release. Some people think Agile excuses them from doing documentation, but documenting the different objects is vital, and if it can be automated (such as with SCA, as discussed earlier), then there’s no excuse for not doing it.
SQL Doc can generate documentation in Word, PDF, HTML, and other help formats. It allows for team members without direct access to the database, or new team members, to come up to speed with the structure of the database, along with what each column is used for and what type of data it stores.
SQL Doc generates documentation using the `MS_Description` extended properties in the database, so remember to fill these in, and add the results into the source code for each database object. Even better, write a unit test to check that all the objects have their `MS_Description` extended properties filled in, so new objects can’t be added if they leave this blank.
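As a sketch (the table and column names here are hypothetical), adding a description, and finding tables that still lack one, looks like this:

```sql
-- Attach a description to a column, stored as an extended property.
EXEC sys.sp_addextendedproperty
    @name = N'MS_Description',
    @value = N'Date the customer account was opened',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Customers',
    @level2type = N'COLUMN', @level2name = N'OpenedDate';

-- Find tables that are still missing a table-level description,
-- e.g. as the basis of a unit test that fails the build.
SELECT t.name AS UndocumentedTable
FROM   sys.tables AS t
WHERE  NOT EXISTS (SELECT 1
                   FROM   sys.extended_properties AS ep
                   WHERE  ep.major_id = t.object_id
                     AND  ep.minor_id = 0
                     AND  ep.name = N'MS_Description');
```

The second query returning no rows is exactly the condition a documentation-coverage unit test would assert.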
Monitoring for Performance, Availability and Security
Once we’ve released the software, it’s important that we monitor the performance, availability and security of the SQL Server estate. This will reveal how releases impact the performance of the production systems and databases, and help ensure the smooth running and security of your non-production systems.
SQL Monitor is always running, monitoring your entire SQL Server estate and helping you to be more efficient and more proactive (instead of just reacting to problems). It will alert you to current issues and help you resolve them. It will also alert you to potential future problems, so you can fix them before they impact your users.
The current release of SQL Monitor will help you to visualize when a release occurred, on its timeline. This is cool because it makes it easy to see whether a release is the reason why your server or database started having problems. Normally, such problems will be picked up in scalability tests, if these run on standard data sets, but they can slip through to production. If you know when the release was done, then you can easily identify the suspects, such as the routines that were altered in the release. This is easy, because of the traceability and auditability that the release management system provides. Now you know what changed, and can figure out what to do to fix the issue quickly. This even works with SQL Compare, if you’re doing manual deployments as you get started on your DevOps journey.
SQL Server estates continue to grow quickly, both in terms of the number of servers that must be monitored, and the size of the databases. They are also changing in nature, due to the ease with which new cloud-based, containerized or virtual machine-based SQL Servers can be provisioned. It’s more important than ever that you have a monitoring tool that can monitor activity across the entire estate, alert you quickly to issues, and displays the problem simply and graphically, in a way that a first-responder understands how to act. You’ll also need a highly controllable and adaptable alerting system, so you can tailor the monitoring strategy according to applications running on the server, and the data it stores.
For compliance reasons, it’s important you’re monitoring for security issues as well as performance or disk space management issues. With SQL Monitor, you can easily set up custom metrics and alerts so that you’ll be warned immediately of possible security issues, such as an escalation in permissions, or unauthorized changes to configuration settings, or of an external SQL injection attack.
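A custom metric is just a T-SQL query that returns a single numeric value, which SQL Monitor collects on a schedule and alerts on when it crosses a threshold. As one sketch of a security metric, you might track the number of logins in the sysadmin role, since an unexpected increase can indicate an escalation of permissions:

```sql
-- Custom metric: count of logins in the sysadmin fixed server role.
SELECT COUNT(*)
FROM   sys.server_role_members AS srm
       JOIN sys.server_principals AS r
         ON srm.role_principal_id = r.principal_id
WHERE  r.name = N'sysadmin';
```

Paired with an alert on any change from the expected baseline, this turns a quiet permissions change into an immediate notification.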
Protecting and Preserving Data
Once we’ve released our database changes, we need to safeguard the data, to ensure that no one (internal or external) makes changes that we’re unaware of, or that haven’t gone through our official change management processes. We also need to back up the data, so that we can recover databases, without significant data loss, in the event of an emergency.
In the rare cases where direct production hotfixes are unavoidable, we need to be extremely careful that they are done safely, and that the changes are back-filled immediately into the source control system.
DLM Dashboard (Drift detection)
We need to know about out-of-process schema changes, and fix issues before they cause a problem for a release, or block it.
DLM Dashboard lets us visualize how releases move through the different database environments in the pipeline. I already talked about how SQL Change Automation performs drift detection as part of a release, but DLM Dashboard can tell you about drift as soon as it happens. Drift is when a database changes from the state it’s expected to be in, because a change happened outside the normal release process. The most common example is when someone applies a hot fix directly to Production.
DLM Dashboard helps you to avoid surprises at release time. Whenever a drift happens, you can get a notification. If it’s a production hot fix, then again, it’s important to bring this change back into source control so it’s not overwritten on the next release.
SQL Backup
Backing up your databases is important, the goal being that, in an emergency, you’ll be able to restore them within your recovery point and recovery time objectives.
SQL Backup provides a nice GUI to compress, verify, and encrypt SQL Server backups. It allows you to see all your database backups in one place and verify them. There’s also a command-line, so SQL Backup can be automated and integrated into your release pipeline.
Network resiliency features help make sure your backups complete successfully. Compression and encryption help you to save space, and make sure your backups are secure.
SQL Compare and Data Compare (Safer hotfixes)
When difficult problems occur on production systems that prevent users from doing their jobs, it’s important to be prepared, so that you can respond very quickly. With the automated deployment and release management processes discussed earlier, it might be easy to develop a fix for the situation in your development environment and release it through the established, automated pipeline.
Sometimes, however, you may need to urgently roll back to a previous, working state. If you’re lucky enough to be allowed downtime, and your database is quick to back up and restore, restoring a backup is one of the safest ways to recover the database to a previous state. You may want to build an automated step into your pipeline that creates a new backup as part of the release, before the upgrade scripts have run, to make sure you have all the latest data.
However, if you really need to roll back via a scripted hotfix then SQL Compare allows you to quickly see what changed on your production environment compared to a previous version or compared to the latest backup. You can easily generate a downgrade script of all the schema changes and apply this directly to Production as a hotfix. Be careful to note any transactional data that may have changed during the deployment. Make sure your downgrade script handles what to do with the changed data or with any data that came in while you were in this problem state.
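SQL Compare generates the downgrade script for you, but the underlying idea can be sketched: diff the current production schema against the previous version and emit statements to remove what the bad release added and restore what it altered. This is a toy model with invented snapshots, and real tools also handle dependency ordering, data preservation, and correct DROP syntax per object type.

```python
def downgrade_script(production, previous):
    """Build statements to take `production` back to `previous`.
    Toy model: object name -> CREATE statement; `DROP OBJECT` is a placeholder."""
    statements = []
    # Objects introduced by the bad release: drop them.
    for name in sorted(production.keys() - previous.keys()):
        statements.append(f"DROP OBJECT {name};")
    # Objects altered or removed by the release: recreate the previous definition.
    for name in sorted(previous):
        if production.get(name) != previous[name]:
            if name in production:
                statements.append(f"DROP OBJECT {name};")
            statements.append(previous[name])
    return statements

previous = {"dbo.Orders": "CREATE TABLE dbo.Orders (Id INT);"}
production = {
    "dbo.Orders": "CREATE TABLE dbo.Orders (Id INT, Note NVARCHAR(50));",
    "dbo.AuditLog": "CREATE TABLE dbo.AuditLog (Id INT);",
}

for stmt in downgrade_script(production, previous):
    print(stmt)
```

Note that this only addresses schema; as the text above says, any data that changed during or after the failed deployment needs its own handling in the downgrade script.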
In the future, we may consider adding “down” scripts to SQL Change Automation. We’d love to hear from you if this is something you need.
How to get started
We like to represent our solution for the development of database applications as a continuous cycle through the entire process, and we have the tools to help at every step of the way.
So, what should you do first? That’s up to you; there’s no one way to do DevOps. Where’s your greatest pain point? I would reflect on that and start there. Some examples could be:
- If you and your team don’t understand DevOps, then you’ll probably want to take some time to learn more. Maybe there is a book you could read or a webinar to watch or a conference you could attend. You might even want to check out our quarterly SQL in the City livestreams or some of our past or upcoming webinars.
- If you are doing manual deployments and want to release faster and/or reduce the risk when deploying database changes, then start with SQL Compare.
- If you feel like your developers aren’t very familiar with the database structure, then SQL Prompt’s IntelliSense is a great place to start to increase your productivity.
- If you want to be able to understand all the changes to your development environment, but still deploy manually with SQL Compare, then start with SQL Source Control.
- If you find that you need to support more frequent releases and you want help automating your database deployments, then look at SQL Change Automation.
- If you feel like you spend a lot of time manually checking your servers or that you’re having performance issues, then start with SQL Monitor.
- If you want to create better test data based on protected production data to find issues with releases sooner, then look at SQL Provision.
Wherever you start, know that you can start small; as you fix one problem, you can find your next pain point and tackle that, continuing to add value to your work. It might be best to start with one team to explore these areas, find some successes, and then share their story with other teams in your company.
Our DevOps tools integrate with the IDEs that you’re familiar with (SSMS or Visual Studio) and with your existing source control, continuous integration, and release management systems. Your organization may already have these in place for application development, so there should be less work for you to research and install other systems. We want to leverage the tools you’re already using for your application and, in some cases, it makes sense to include the database alongside the application code in the source control repository and possibly as part of the same release pipelines.
Another great place to start is at www.redgate.com/devops. Here, you can get higher level information for your executives and managers, take a maturity assessment to get some more insight into your current processes and what to do next. There’s also information about how to get in touch with our DevOps solution team who are happy to answer any questions you may have.