
What is Database Lifecycle Management (DLM)?

When the different teams that are involved throughout the life of a database fail to reconcile their different roles and priorities, and so fail to cooperate, or work adaptively, the result is gridlock: databases defined as though carved in stone rather than by code and data. DLM offers an alternative that allows databases to respond quickly to business change.


What is DLM?

Database Lifecycle Management (DLM) combines a business and technical approach to improving database development (or acquisition), delivery and management.

DLM aims to ensure that an organization has in place the processes, methods, business systems and tooling that will allow it to design, develop and then progressively refine even the most complex databases more rapidly, and with less effort.

It acknowledges a paradox: databases are becoming increasingly complex and challenging to develop in response to business demands, yet in spite of this we must be able to change and adapt even complex databases, in response to business change, within the time required.

The scope of DLM is, as the name suggests, the entire useful life of a database: from design and data architecture, through the development and delivery processes, to managing and refining the database while it is in operation. It is intended to make all the processes that make up the lifecycle more predictable and visible, with the objective of reducing costs and increasing quality. It also aims to encourage innovation, and cooperation between teams.

DLM encourages the participants in a project to look at the processes that make up the lifecycle of any database, in broad strokes and in detail, to find ways of ensuring that the right things happen at the right time and that every team has the best information with which to make its contribution.

Foundations of DLM

DLM is most concerned with improving the processes, or working methods, used in the design, development, integration, build, test, configuration, deployment and operational management of databases. To do this, DLM uses well-tried approaches to facilitating complex teamwork. It is less concerned with specific development tools and techniques. DLM also does not require adherence to specific development methodologies, although some methodologies will be able to exploit DLM techniques more readily.

DLM is concerned with providing the best chance that a database system or process can be improved, and that failures can be remedied rapidly. To achieve this, database systems and processes must be:

Instrumented: Processes within the database must be easily measured for both performance and accuracy, and logged. Without a baseline, it is difficult to establish that a process has been improved. Without detailed evidence, it is difficult to nip problems in the bud or track down bugs.

Resilient: The cost of any unreliable server-based system is reckoned in man-hours wasted and trade lost. As well as constraints and checks for system integrity, and defensive coding techniques, a resilient system has a well-planned hierarchy of warning and error systems with escalating alerts to ensure that stress and anomalies that warn of impending failure can be acted on well in advance. A resilient system is easily refactored and has a clear functional structure. There must be ways of monitoring excessive technical debt and preventing the deployment of failure-prone systems.

Managed: All database operations and processes must be managed. This implies

  • Visible: There should be no surprises. All the different teams involved in the lifecycle of a database should be able to check on progress, so as to be able to give timely advice and support. Source code must be held in a development VCS, and deployment scripts, maintenance procedures and server settings in a CMS; all of these should be used, via automated scripts, regularly or if possible continuously, to deploy the ‘working version’ to the integration server.
  • Documented: It should be possible for development team members to join a database project without excessive initiation. All participating teams must document sufficiently, but not excessively, to allow all participants to contribute their skills and knowledge without ‘reverse-engineering’. Documentation needs to cover the ‘when’, ‘why’ and ‘where’ of decisions, as well as the ‘what’.
  • Tested: Before a system is released it must be tested against the release criteria. Before deployment, it must be tested against the criteria established a priori by the governance activity in conjunction with the business, end-users and operations. DLM uses automated testing where possible but acknowledges the importance of manual testing.
  • Measured: Where possible, all teamwork and actions such as approvals, tests, sprints, and phases should be measured in terms of time, cost and quality. Metrics from techniques such as static code analysis for technical debt, and performance (see ‘instrumented’), are useful aids to support judgement on quality. With baselines and measurements, it is much easier to prove improvements and refine processes.
  • Repeatable: The same steps can be repeated accurately, in the same order, to produce the same result. If a change must be made only once, it should be idempotent, in the sense that it has no additional effect if it is called more than once with the same input parameters (a minimal sketch follows this list). It also means that the documentation, source, and archived information about the system are of sufficient quality that, when a problem is impending, urgent remedial action can be guided for the ‘first responder’, rather than having to wait for an expert.
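
A minimal sketch of such an idempotent change, using the Invoke-Sqlcmd cmdlet from the SqlServer PowerShell module; the server, database, table and column names are invented for illustration, and a real change script would live in version control alongside the rest of the source:

Import-Module SqlServer

# A repeatable change: the T-SQL guard makes the script idempotent, so running
# it a second time has no additional effect. All names here are hypothetical.
$change = @'
IF NOT EXISTS (
    SELECT 1 FROM sys.columns
    WHERE object_id = OBJECT_ID(N'dbo.Customer') AND name = N'LoyaltyTier'
)
BEGIN
    ALTER TABLE dbo.Customer
        ADD LoyaltyTier tinyint NOT NULL
            CONSTRAINT DF_Customer_LoyaltyTier DEFAULT (0);
END;
'@

Invoke-Sqlcmd -ServerInstance 'IntegrationServer' -Database 'MyDatabase' -Query $change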

When a number of processes have been refined to a managed level, it becomes possible to employ, where appropriate, an iterative approach to development that allows a working database to be delivered in short cycles, and deployed incrementally to a production-like environment. This approach is helpful for a number of reasons, but is not essential in order to benefit from DLM. It is visible proof of having a managed process, but not necessarily an end in itself.

DLM relies on automated scripting and third-party tools to improve the speed, reliability and repeatability of each process from initial planning, through development and production support. However, it is not enough simply to improve the efficiency of individual processes. DLM also works to integrate critical processes within the database lifecycle. For example, in DLM, bugs reported to Support teams in production or test environments should flow directly into a central bug tracking system, which itself integrates directly with the version control system. When a developer changes the affected code in response to the bug report, he or she includes the bug ID in the code header so that the tester can verify that the change eradicated the bug.
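
As a simple, hypothetical illustration of that link, the same bug ID can appear both in the header of the amended script and in the commit message, so the bug tracker and the version control system can be cross-referenced; the ID, file path and procedure name below are invented.

# Hypothetical example: the bug ID appears in the header comment of the amended
# script and again in the commit message, so the bug tracking system and the
# VCS can cross-reference the change.
$bugId = 'BUG-1234'

# The amended script carries the ID in its header comment, for example:
#   /* Fix: BUG-1234 - incorrect rounding in invoice totals */

git add 'Procedures/dbo.CalculateInvoiceTotal.sql'
git commit -m "${bugId}: correct rounding in dbo.CalculateInvoiceTotal"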

Integrated processes and software tools by themselves are not sufficient to make DLM work. Effective DLM relies on very clear documentation of these processes, located on a central information hub for everyone associated with a particular database, to allow easier communication among those working on a database. It introduces and encourages workflow that is capable of conditional branches to make the review process easier to monitor, so that it becomes easier to integrate requirements such as regulatory compliance, resilience, data quality, production monitoring and downstream reporting.

Data is central to business change, so DLM requires organizations to change the way that the business works with IT. Instead of a project-by-project creation of data-domain silos, an enterprise should maintain a data architecture (aka Data Model) that documents, validates, classifies and verifies its data across the enterprise from source right through to reporting, and establishes a vocabulary for naming that is understood by all. Databases must be designed in a way that facilitates change. While DLM will improve individual working methods for databases in a way that benefits any systematic development methodology, it is most effective as a phased, iterative process because this encourages development cultures, and supporting processes, that facilitate change. It is no longer enough to assume that businesses are content with databases that are unchanging in their design. Instead of current database applications locking business data processes in place because changes are difficult to make, databases must reflect the data architecture as it evolves, in the light of changes to the organization they support. This means that databases must be easier to modify.

What DLM Delivers

DLM is intended to make all database processes more predictable and visible, to improve the organization’s knowledge of the database and related applications, to identify opportunities for optimization and to support decisions within the enterprise. 

DLM will improve the visibility of the database project to all its participants, including:

  • The final customers of databases, whether end-users, sponsors, teams responsible for downstream processing and analysis, or other applications
  • Application management and development teams
  • The project managers with the responsibility of allocating resources
  • The teams tasked with deployment, support, maintenance and decommissioning
  • IT management responsible for compliance, quality, strategy and architecture

It will optimize communication both within and across teams, allowing earlier and faster resolution of any potential conflicts, and preventing the need for drastic design changes late in development.

DLM supports the participating teams by overcoming the challenges of managing workflow and communications, and of dealing with disparate priorities, processes, metrics, and infrastructures. This is done to:

  • Reduce development time
  • Coordinate separate projects better
  • Improve the collection and re-use of ‘domain’ knowledge and software techniques
  • Give better predictability to all parts of the process
  • Increase quality
  • Lower production costs
  • Drive collaboration across development and operations teams

DLM aims to use automation, workflow and monitoring wherever appropriate to improve the time-intensive processes of build, integration, release, test, and deployment, in order to reduce the workload required and allow more frequent and more reliable delivery of changes and enhancements. DLM supports Database Continuous Integration and Delivery where it is required.

The DLM Landscape

The stages of the database lifecycle are usually referred to as:

  • New (being designed or developed, or given the green light)
  • Emerging (in production but in pilot form)
  • Mainstream (in active production use, and being actively maintained)
  • Contained (in production only for limited or legacy use)
  • Sunset (scheduled for retirement)
  • Prohibited/Retired (no longer used)

DLM extends much further than just the development lifecycle, covering three major concurrent streams of governance, delivery, and operations throughout the phases of the lifecycle. These are activities that may map to several different teams but, in smaller businesses, may be managed by just one or two teams. A stream does not relate directly to a team, and many believe it should not, because of the importance of coordination.

For a project to succeed, all these activities must be done right. If a project gets the initial governance aspects wrong, it is unlikely to provide much business value no matter how well it does the development and operations. Similarly, however successful a database project may be in tackling the problems of the development process, it is doomed if it neglects operational issues, such as training, support or providing enough resources to run the application reliably. In either case, the business value the investment provides won’t be as large as it should be. DLM techniques can provide a broad view of database applications to help organizations avoid problems like these.

For each stage of the lifecycle, the three streams of governance, delivery and operations have distinct concerns.

New

  • Governance: Identify business needs and data architecture. Determine the mix of bought-in, commissioned and internal development work. Identify business priorities for the database. If developing in-house, work with the development teams on the planning role for development. Identify all enterprise-wide data and processes shared by applications, and plan data interfaces and feeds. Ensure the participation of all IT expertise in the organization for security, compliance and so on. Agree a resilience plan with the Operations activity, in conformance with the standards of the organization.
  • Delivery: Work with governance to plan out all development work sufficiently to allow cost estimation for development to the ’emerging’ level. Provide a roadmap for iterative development. Execute all necessary development work, and plan out the detail of all supporting ETL and reporting systems and services. Ensure that development is a managed process. Determine all database requirements to comply with the legislative framework.
  • Operations: Provide detailed plans for the hardware platform, along with server and network configuration. Work with the development team and governance to host the pilot version of the database. Advise governance on the requirements for maintenance, monitoring, support and training. Establish likely demands for scaling and evaluate alternatives. Establish a CMS archive. Work with governance to make sure that the hosted production database meets DR plans and high-availability service-level agreements.

Emerging

  • Governance: Work with the business, the development team(s) and all ‘downstream’ consumers of the data (e.g. BI) to ensure that the database meets the ‘contract’ for quality and specification. Check that the database meets requirements for operational support and maintenance.
  • Delivery: Regular or continuous deployments to deliver new and changing functionality. Work with Operations to develop features that improve database instrumentation, resilience and general ‘maintainability’.
  • Operations: Training for users, and for support, maintenance and high availability. Implement whatever replication, warehousing, high-availability or DR scheme is required. Monitor performance for index problems and poor algorithms. Work with governance on security issues.

Mainstream

  • Governance: Check for compliance with the regulatory framework, and plan for changes that are a consequence of business change (e.g. mergers or acquisitions). Check for compliance with corporate or company-wide data policies and standards.
  • Delivery: Deployments to deliver new and changing functionality, in line with changes in the business and the legislative framework, as well as changes required by database version and OS version upgrades.
  • Operations: Work on ensuring that all likely points of stress or failure are matched with documented processes for correction. Monitor and test all DR strategies. Improve the CMS archive to ensure all maintenance tasks are possible without special knowledge or heroics.

Contained

  • Governance: Continue to monitor for changes that are required as a result of security concerns, and legislative and business changes.
  • Delivery: Deployments only for required changes, as specified by governance.
  • Operations: All operations, such as maintenance and monitoring, done as a routine, relying on documentation and the CMS archive.

Sunset

  • Governance: Ensure that all requirements previously met by the database that are still relevant are now met by other systems, and that migrations are under way. Continue to assess security risks.
  • Delivery: Provision of systems to provide any continuity of functionality that is required.
  • Operations: Continue all maintenance as a routine, relying on documentation and the CMS archive.

Retired

  • Governance: Plan for the archiving of data.
  • Delivery: Ensure the retention and long-term archiving of all development source code.
  • Operations: Check that media can still be read for as long as data retention is required.

Governance

Governance encompasses all of the decision making, planning and project management for a database throughout its life, with the aim of making sure that the application continues to provide what the business needs. This role is likely to be supervised by different activities within the organization at different stages of the life of a database, from initially determining the business case, through to eventual decommissioning. Once a database becomes part of the portfolio of applications that are used by the organization, the governance activity then provides ongoing assessments of the database’s benefits, risks and costs and determines when revisions and updates are necessary.

Governance starts at project conception, identifying the business needs and objectives and continues to the end of the lifecycle to ensure that the requirements are met and the database application is then correctly maintained, reviewed, and updated as needed.

Planning

The planning role is part of governance.

The definition of the product requirements is based on the viewpoints and requirements of customer, company, market and regulatory bodies. A database specification is produced from which the database’s major technical parameters and architecture can be defined.

This has four important aspects.

  • Remember: Check previous projects to see where existing ‘domain’ knowledge and software techniques can be reused. Learn from mistakes in previous projects, and within the industry.
  • Innovate: The conception stage is the ideal point to check whether new frameworks, tools and techniques can reduce the delivery time, yet fit the requirements.
  • Specify: The requirements of the database must be documented, and the main processes identified. Organizations need to identify and understand the key initiatives, pain points and business problems they are actually trying to resolve by changing the database. All upstream data feeds and applications, and downstream reporting and analysis applications, must be identified along with any restrictions that are likely on the movement of data. Before delivery can be attempted, there has to be a thorough understanding of data right through to its source, its nature, constraints and characteristics, along with estimates of volume and distribution. Is this structured or ‘fluffy’ data? What are its lifetime and retention requirements? What audit requirements are there? How mutable is it? What naming is in place within the users’ ‘domain’ to refer to the data?
  • Plan: At the conception stage, planning involves the creation of ‘architectures’, chiefly the ‘domain’ or business architectures, preferably modelled diagrammatically and given a name so that the delivery process can adopt naming conventions at the object or entity level. These architectures are then used to identify the logical areas of the database and to provide rough estimates of timescales and the likely effort required to deliver the database. There should be a plan for the overall design that is sufficient in detail to allow the delivery process to start. There should be sufficient High Level and Detailed use-cases to provide a foundation for documentation, training material, and test plans.

Delivery

Although delivery is split into design, development, release and deployment, it is possible with certain types of database, particularly when close-coupled to an application, to automate the bulk of the repetitive development and testing processes that teams use to deliver, manage, and maintain the database and application. This allows ‘Continuous Delivery’ from the point of version controlling changes, to deploying them to different environments, and, when ready, opting to deploy to production. If done well, it can help teams to reduce risk and increase both efficiency and reliability in the database delivery process. Whether or not this can be adopted, the processes, and the requirements for cross-team communication and automation, remain similar.

Design

In any relational database, Entity-Relationship (ER) modelling, assisted by UML, is important. The ER model should be maintained so that it reflects the database definition in code, as it evolves. Although code can be generated from an ER diagram, it is normally merely first-cut code that is then developed at the object and schema level.

Increasingly, large databases are broken down into more manageable logical components, and the implementation of a logical component could occasionally involve other types of databases, such as network databases or document databases. In such cases, all components of the database must have interface specifications, with the aim of minimizing interdependencies. In this way, the database can be developed in phases, and deployed independently, so that business value can be delivered as soon as possible.

Part of the design process involves the security model, preferably based on schemas and interfaces. At this stage, the requirements of the production team for support, database administration, reporting, monitoring and compliance are recorded. Policies for naming conventions, code formatting, documentation and security will be identified and agreed.

Development

The development phase of the lifecycle begins soon after the stage of refining and approving requirements. Development is best done iteratively, with a lean, frequent release cycle. As well as responding to new requirements from the governance process, the developers will need to respond to the need for changes to any version of the database in production, to deal with bugs and incorrect indexing.

Design and development work should be continuously validated against the user requirements, and prototyped in a way that encourages participation. The validity of the work of each database development phase, for each logical component, should be continually checked against the output of the previous phase. All changes should be synchronized and verified continuously, with the accent on testing. Smaller, more frequent database changes allow development teams to integrate their work sooner, see testing results and feedback earlier, and act on any issues much more easily. This makes it possible to release database changes more frequently and more quickly.

All development work should be verified for accuracy and performance by means of automated testing, through simple unit tests, before check-in of changes. ‘Mocks’ are created where necessary for upstream and downstream dependencies.

Regularly, ideally daily, the team should build the current working version of the database, via scripting from the canonical version in source control, and then compare and synchronize it with the version being used for development work.

This version should be subject to automated integration testing for performance and accuracy, and the results of this testing made easily accessible to the developers.
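
One possible shape for such a scripted build, assuming the object build scripts are held as .sql files in a working copy checked out from version control and that the Invoke-Sqlcmd cmdlet from the SqlServer module is available; the server, database and path names are placeholders, and a real build would apply the scripts in dependency order rather than by file name.

Import-Module SqlServer

$server   = 'IntegrationServer'
$database = 'MyDatabase_CI'
$source   = 'C:\Source\MyDatabase'   # working copy checked out from the VCS

# Rebuild the CI database from scratch, so the result reflects only what is in
# source control (a sketch; a real build would handle open connections and errors).
Invoke-Sqlcmd -ServerInstance $server -Query "IF DB_ID(N'$database') IS NOT NULL DROP DATABASE [$database]; CREATE DATABASE [$database];"

# Apply every object build script. Sorting by file name is a naive stand-in for
# a proper dependency order.
Get-ChildItem -Path $source -Filter '*.sql' -Recurse |
    Sort-Object FullName |
    ForEach-Object { Invoke-Sqlcmd -ServerInstance $server -Database $database -InputFile $_.FullName }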

Release

In DLM, the release of a database version is part of a change order, specifying all changes to the design, interfaces and build requirements. Where relevant, the release will also include references to the precipitating change request, or bug ticket.

The release manager or change analyst (often a project manager) is responsible for expediting the change order through the release process, with the objective of safely managing and monitoring the release through to deployment to the customer.

The database scripts are, at this stage, object build-scripts, but any data-migration scripts that have been developed are included. Along with the database build scripts, the release process will result in a ‘build artefact’ that contains everything required for the build. This can be held in a central CMS repository as files, or as a zipped NuGet package or DACPAC. It will have been validated in a CI environment, as well as in testing and QA, but not yet against a production server.
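
At its simplest, the build artefact can be a versioned archive of the validated build and migration scripts, published to the central CMS. A minimal sketch, with invented version number, folder and share names:

# Assemble a versioned build artefact and publish it to the CMS archive.
# The version number, folders and the CMS share are all placeholders.
$version  = '1.4.0'
$buildDir = 'C:\Build\MyDatabase'          # validated object build and migration scripts
$artefact = "C:\Artefacts\MyDatabase.$version.zip"

Compress-Archive -Path "$buildDir\*" -DestinationPath $artefact -Force

# Publish to the central configuration-management archive.
Copy-Item -Path $artefact -Destination '\\cms-server\DatabaseReleases\MyDatabase\'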

At this stage, a number of checks or ‘approval gates’ are required. These will probably include hardware platform requirements, implications for compliance, performance testing and review, and confirmation of user acceptance. The release will also need to be checked for compliance with production requirements for maintenance, support and monitoring.

The release code and components are archived in a central configuration-management archive along with all scripts required for deployment to test and staging. Normally, final tests are made for compatibility with all dependent systems and data feeds. Where scalability testing is difficult to achieve as part of the development cycle, this is delayed until this stage. Where database tables are changed, migration scripts must be created that preserve the existing data and assign it correctly within the new table structure. These are tested against a database server which is as close as possible to production. Roll-back strategies are devised and tested at this stage.

Deployment

The deployment phase aims to set up everything required to make the database operational. To do this, the build artefact from the central repository must be deployed to a staging database set up as a copy of production, in order to generate an upgrade script for deployment. This may involve setting up the appropriate infrastructure, deploying the database in its production environment with the appropriate data, or preserving the current data. It might also require associated frameworks or libraries. For a complex database, it might require delivering training materials and documentation, and using these to train operations staff. Operations may also require hotline support to be set up for complex issues.
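
Where the artefact is a DACPAC, one way to generate (rather than run) the upgrade script against the staging copy of production is the Script action of Microsoft's SqlPackage utility; a sketch follows, with invented server, database and path names, and assuming sqlpackage is on the path.

# Generate an upgrade script against a staging database that is a copy of
# production. Names and paths are placeholders.
& sqlpackage "/Action:Script" `
    "/SourceFile:C:\Artefacts\MyDatabase.1.4.0.dacpac" `
    "/TargetServerName:StagingServer" `
    "/TargetDatabaseName:MyDatabase_Staging" `
    "/OutputPath:C:\Artefacts\Upgrade_1.4.0.sql"

# The generated script is then reviewed by the DBA and archived in the CMS
# before it is approved for deployment to production.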

The DBA is likely to have to deal, in script, with a number of issues that are beyond the remit of the developers, such as access-control, hardware specification for data, replication, messaging, alerts, ETL systems, and interfaces to downstream systems dependent on data.

Different teams will have different requirements for this process and there are a range of third-party tools that can help to automate the process that is in place. The size of the production system will dictate the alternatives that are suitable.

As well as confirming and implementing the plans for security, resilience and availability of the database, the DBA will want to review the whole artefact, together with the upgrade script, to check whether it is production-ready, and detect whether there have been any changes to the version of the database in production, since the developers generated the validated script that was used with the build artefact (database ‘drift’).
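
A crude way to flag possible drift, sketched below, is to list any schema objects in production modified after the date on which the release was validated; the cut-off date, server and database names are placeholders, and a more robust check would compare checksums or a recorded schema snapshot.

Import-Module SqlServer

$validatedOn = '2016-03-01'   # placeholder: date the release script was validated

$drift = Invoke-Sqlcmd -ServerInstance 'ProductionServer' -Database 'MyDatabase' -Query @"
SELECT s.name AS schema_name, o.name AS object_name, o.type_desc, o.modify_date
FROM sys.objects AS o
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE o.is_ms_shipped = 0 AND o.modify_date > '$validatedOn';
"@

if ($drift) {
    Write-Warning 'Possible database drift since the release was validated:'
    $drift | Format-Table -AutoSize
}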

All databases that are involved in a deployment keep a record of all the changes that have been successfully applied to them. There is also a full record of hardware specification, software installation, and database configuration for all servers and databases involved in production.

There is likely to be an approval gate, with restricted access, as part of the process of deploying to a live production environment. This helps to implement ‘separation of duties’ between development and operations when this is a requirement of the system. By maintaining a central CMS archive, database administrators and other participants in the deployment process can use source control and workflow to track, and report on, all changes, issues and signoffs that mark the progress from development into testing, QA, pre-production, and production environments.

Operations

There is no point in the database lifecycle at which the operations activity is not relevant.

Access-control

Access control must be closely allied with domain-wide access control, and should require as little DBA intervention as possible. This is achieved by means of schema-based access control using role-based security.
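
A minimal sketch of what that can look like: permissions are granted to a database role on an interface schema, and a Windows group is added to the role, so that individual users never need per-object permissions. The role, schema and group names are invented, and the corresponding server login is assumed to exist already.

Import-Module SqlServer

$security = @'
CREATE ROLE SalesReader AUTHORIZATION dbo;
GRANT SELECT, EXECUTE ON SCHEMA::SalesInterface TO SalesReader;

CREATE USER [MYDOMAIN\SalesAnalysts] FOR LOGIN [MYDOMAIN\SalesAnalysts];
ALTER ROLE SalesReader ADD MEMBER [MYDOMAIN\SalesAnalysts];
'@

Invoke-Sqlcmd -ServerInstance 'ProductionServer' -Database 'MyDatabase' -Query $security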

Security

Operations will probably set up intrusion-detection systems and put in place systems for detecting malicious ‘version drift’. There will be auditing systems, planned in conjunction with Governance. These should be designed with a long-term view of simplicity for the ‘contained’, ‘sunset’ and ‘retired’ phases of a database, where security checks cannot be relaxed.

Maintenance

As well as supervising the regime that is designed to provide resilience and high availability, and ensuring that scheduled processes complete successfully, there will be the task of fixing application defects, applying hotfixes for missing or inappropriate indexes, reporting software bugs and determining temporary workarounds, and managing the addition of new functions to the application.

Support

It is likely that there will be some requirement for multi-level operational support of a database system, unless this is entirely provided by the applications. For ‘hotline’ support, all the likely FAQs that are relevant to the database (e.g. database messages and warnings viewed on the users’ screens, or connection problems) need to be documented. If this requires participation from development teams, then this needs to be fed into the governance process as a production requirement. When an issue arises, several levels of hotline and on-site support will be required, depending on its complexity. These issues will vary from those easily dealt with by the hotline, to those which require a new database version. Some support teams will need guidelines that allow them to decide to what level to escalate a support question.

Training

Training is required for any major change to a system. Normally this is supported with materials supplied by the application development teams and doesn’t directly affect the database. However, training materials must usually be provided for production staff who are required to perform maintenance on the business aspects of the database. Where there is a rapid deployment of changes to a database system over several months or years, there will soon be a mismatch between the production system and the training materials. Ideally, changes in training and support materials should be kept in sync with changes in the database and applications.

Monitoring

It is important to be able to head off likely failures when they are only visible as trends within a monitored system, before they develop into problems or failures. This requires monitoring systems that can produce alerts on trends away from established baselines. As much of this as possible needs to be automated. Although monitoring systems can help ‘off the shelf’, they cannot measure or monitor business-specific trends within the database, such as business transactions or functional usage. For this, the database itself requires instrumentation. It is likely that an effective monitoring system will be a mix of monitoring tools and scripted processes.
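
As a sketch of that sort of business-specific instrumentation, the script below compares the number of orders recorded in the last hour against a baseline figure and raises an alert if it falls too far below it; the table, baseline, threshold and alerting mechanism are all placeholders for whatever the system actually uses.

Import-Module SqlServer

$baselinePerHour = 1200   # placeholder baseline established from historical monitoring
$tolerance       = 0.5    # alert if volume falls below 50% of baseline

$result = Invoke-Sqlcmd -ServerInstance 'ProductionServer' -Database 'MyDatabase' -Query @'
SELECT COUNT(*) AS OrdersLastHour
FROM dbo.SalesOrder
WHERE OrderDate >= DATEADD(HOUR, -1, SYSUTCDATETIME());
'@

if ($result.OrdersLastHour -lt ($baselinePerHour * $tolerance)) {
    # Placeholder alerting: a real system might raise a ticket or page the on-call DBA.
    Send-MailMessage -To 'ops@example.com' -From 'monitor@example.com' `
        -Subject 'Order volume below baseline' `
        -Body "Only $($result.OrdersLastHour) orders in the last hour (baseline $baselinePerHour)." `
        -SmtpServer 'smtp.example.com'
}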

Resilience

All systems for providing high availability and disaster recovery will be agreed with the business and implemented. After implementation, they will be tested to ensure that they are effective and up-to-date.

DLM techniques

DLM is impossible to achieve without being able to automate many of the time-consuming routine processes. The success of DLM also relies on a handful of core working methods that underpin the manageability of the development process. None of these are unique to DLM.

Source Control

Source control makes it easier to share code, and easier to inspect it. It ensures that no changes can get lost and gives a history of changes to code, explaining when, what and why a change was made. It allows a deployment always to be made from a ‘known state’ or version. It makes it easier to check, supervise, approve and test code as new functionality is developed, and ensures that the right version of code is worked on. It also allows developers to work on different branches of the database, whilst ensuring that modifications to the core functionality of the database are preserved in all branches.

Preservation of existing data

Existing data in a database that is the ‘target’ of a deployment is preserved by means of a migration script. Although a synchronization tool such as DACFx or SQL Compare can normally determine how data should migrate after a table change, a hand-cut migration script allows, where necessary, a more precise way of determining how schema and data changes are made; it is also much more easily tested, because it automates the change from one version to another. Furthermore, for some types of database change, such as renaming a table, splitting or merging columns, or changing the data type or size of a column, the synchronization tool would be unable to interpret the precise intention of the developer and would risk loss of data as a result. Checking in migration scripts also helps developers to specify exactly how to make specific database changes, especially where a more obvious method would lock essential tables for a significant time. Migration scripts will often involve more than one object, so should be stored separately. The build process then needs to work out where each migration script must be applied.
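
As an illustration of the kind of change a synchronization tool cannot infer, the hand-cut migration script sketched below splits a single name column into two new columns while preserving the existing data; the table and column names are invented, and a production script would also handle errors and edge cases.

Import-Module SqlServer

# A simplified hand-cut migration: add the new columns, populate them from the
# existing data, then drop the old column. GO separators keep each step in its
# own batch so the UPDATE can see the newly added columns.
$migration = @'
BEGIN TRANSACTION;

ALTER TABLE dbo.Customer ADD Forename nvarchar(50) NULL, Surname nvarchar(50) NULL;
GO

UPDATE dbo.Customer
SET Forename = LEFT(ContactName, CHARINDEX(' ', ContactName + ' ') - 1),
    Surname  = LTRIM(STUFF(ContactName, 1, CHARINDEX(' ', ContactName + ' '), ''));
GO

ALTER TABLE dbo.Customer DROP COLUMN ContactName;

COMMIT TRANSACTION;
'@

Invoke-Sqlcmd -ServerInstance 'StagingServer' -Database 'MyDatabase' -Query $migration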

Deployments can be made without synchronization tools where all changes are specified by migration scripts. In this case, source control contains immutable migration scripts rather than object-build scripts, and the build is done by progressive recapitulation of the migration scripts in their original order. This is more appropriate for relatively small databases, unless each release is consolidated into a single script from the individual changes.

Bug Tracking

Where database developers are working closely with testers, and are running regular automated testing, it pays to relate bug fixes with changes to code. A change should, where relevant, refer to the bug it fixes. This should also be true of feature requests and feedback from user-acceptance testing. This is particularly useful where documentation can be updated to pull together information from the separate systems.

Configuration management

In the same way that the DML and DDL source of the database is stored in source control, all materials that are shared among Ops staff should be held in a central repository. This is particularly important for the scripts and structured documents that are used for the automated deployment of databases to a range of servers.

Documentation

Documentation is not only helpful for delivery, but is also the key to speeding the work of readying a release for deployment. It supports a closer integration between development and operations, and makes hunting down bugs easier. To avoid repetition and cut-and-paste, all documentation should have a single canonical source, whether it be in source control, the code itself, the database, the bug tracking system, the workflow system, or a separate document in source control. The assembly of the documentation needs to draw the appropriate documentation from all the separate sources, using automation wherever possible.

Logging and Log management

Logging is useless without good ways of parsing and analysing the logs so that they can be searched rapidly. Ideally, logging information should be stored in a searchable repository from which graphs, reports, and alerts can be easily generated. This is particularly important with database performance and scalability testing, where a large amount of timing data and counters have to be retrieved and analysed.

Collaboration

There is a range of collaboration tools to make teamwork easier to coordinate. These range from simple team-based ‘chat’ systems to full workflow systems, but there is no consensus on what represents the ideal way of coordinating across teams and ensuring that tasks such as ‘signoffs’ are done in a timely manner. Developers seem to prefer the most lightweight collaboration system that is sufficient to allow remote working and to give other teams sufficient visibility.

Monitoring

The delivery process as well as the database itself must be instrumented and monitored. This applies to the progress of the release process through to delivery and deployment. Obvious candidates for monitoring would include completed workflows, source-control actions, documentation items, static code analysis metrics, bug resolution, failures in automated integration tests, performance metrics for the database under load tests and scalability tests.

Automation-assisted test

There are many points at which tests have to be run: assertions on compilation of functions or procedures, tests for table constraints, unit tests for any routine, integration tests for processes, performance tests, scalability tests, security and access-control tests, and many others. These tests supplement manual testing and user-acceptance testing, but are an integral part of the build and deployment process. They require data that simulates real production data in its distribution, characteristics and volume. This data must be machine-generated, or obtained via obfuscation of real data.

Tests can be run via a framework such as tSQLt, or a generic test system. They can also be run via PowerShell scripting, as long as care is taken to present the results clearly.
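
A sketch of driving tSQLt from PowerShell as part of an automated build, assuming the framework and the tests are already installed in the CI database; the server and database names are placeholders.

Import-Module SqlServer

$server   = 'IntegrationServer'
$database = 'MyDatabase_CI'

# tSQLt.RunAll raises an error at the end if any test fails, so trap it and
# report from the results table instead.
try {
    Invoke-Sqlcmd -ServerInstance $server -Database $database -Query 'EXEC tSQLt.RunAll;' -ErrorAction Stop
}
catch {
    Write-Warning 'tSQLt reported one or more failing tests.'
}

$results = Invoke-Sqlcmd -ServerInstance $server -Database $database -Query 'SELECT * FROM tSQLt.TestResult;'
$results | Format-Table -AutoSize
if ($results | Where-Object { $_.Result -ne 'Success' }) { exit 1 }   # fail the build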

Scripting and workflow

Although tools such as Puppet or Chef, and build/release tools such as Jenkins, can help, PowerShell scripting is likely to be the bedrock of any Windows-based DLM system because it has support for SQL Server, WMI, Active Directory, performance counters, SMO, DacFx and a host of others. Because of its support for remote operation and workflow, it is able to script sophisticated processes within the Windows domain. Octopus Deploy provides Pre- and Post-deployment scripts, written in PowerShell, that can be executed on remote machines outside the Windows domain. Most build and release servers allow functionality to be extended via PowerShell scripting. PowerShell scripts should be kept in a CMS archive, along with server settings and other configuration files.
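
As a small example of the kind of scripted housekeeping this implies, the sketch below captures the server-level configuration into a file that can be kept in the CMS archive alongside the deployment scripts; the server name and share path are placeholders, and a real system would also capture database-scoped settings.

Import-Module SqlServer

$server  = 'ProductionServer'
$cmsPath = '\\cms-server\Configuration\ProductionServer'   # placeholder CMS location

# Capture the current server configuration so it can be versioned in the CMS archive.
Invoke-Sqlcmd -ServerInstance $server -Query 'SELECT name, value_in_use FROM sys.configurations ORDER BY name;' |
    Export-Csv -Path (Join-Path $cmsPath 'sys.configurations.csv') -NoTypeInformation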

Summary

The best way for database systems to meet the requirement for increasingly complex databases, and ever-more rapid change, is to adopt the same principles that revolutionized manufacturing in the late twentieth century, and which have been used effectively for application lifecycle management over the past decades. It is basically an attention to making the techniques, processes and tooling for databases more efficient and better integrated. Although these techniques are primarily aimed at making the business of creating, developing and maintaining databases more manageable, repeatable and visible, DLM has the side effect of improving quality and reducing time to delivery. DLM is quite different in many respects from ALM, because of the complications of data and the general connectedness of databases, as well as the greater requirement for up-front planning and compliance with organizational policies and standards. Data represents the most valuable asset of many enterprises, and has unique requirements as a consequence.

