
DevOps Collaboration and Process Visibility in Flyway Developments

A brief history of the DevOps movement and a discussion of the pivotal role of a tool like Flyway in the DevOps toolchain, when developing and delivering database changes.

DevOps is a term that, like ‘Agile’, ‘Big Data’ and ‘Web 2.0’ before it, brings a gleam to the eye of IT marketing people. IT terms tend to lose their precise meaning once they become part of the marketing vocabulary. ‘DevOps’ has largely avoided this fate because several books, published soon after the term came into use, explained what it was and nailed down its meaning.

DevOps evolved from the cooperative working practices that developers and IT operations staff adopted to avoid the pitfalls suffered by a large proportion of large IT projects at the time. It involved finding new ways to deliver applications more quickly by combining the skills of two very different types of technologists: developers and Ops people. It also involved exploiting new and radical technologies to overcome the many blockers on the road to delivering change in any IT system. It didn’t necessarily mean that each side had to develop new skills, merely that they shared their skillsets to help each other out and remove as many blockers as possible from the application development process.

IT gridlock

At the time the DevOps movement emerged, the attention of IT management was focused on the slow progress towards the significant milestones of the major ‘re-engineering’ projects for which they were responsible; the ‘diamond points’ in the language of project managers. They concentrated on hitting milestones on time because that made progress seem measurable. Milestones looked good in presentations, and so the subsequent delays in delivery often came as a surprise.

Unfortunately, the objectivity of this measure of project progress was an illusion, and the slippage they measured so scientifically turned out to be a symptom of the malaise, not its cause. The natural reaction of most managers at the time to a ‘slippage’ was to hire more developers, on the principle that if ten men took a day to dig a hole, twenty could finish the job by midday. It usually resulted merely in twice as many developers facing the same frustrating and unavoidable delays. Although a lack of development resources can slow progress, and even cause a project to fail, slippage is more likely to be due to sclerotic management, inappropriate team structures, poor planning, incorrect business analysis and poor quality. It was just as likely that the project couldn’t be put into production because of its poor architectural design, or that it wasn’t what the users asked for in the first place. Often, a project would become gridlocked when the task of testing ballooned, while bug-tracking and reporting were generally so poor that the deployment process locked up in confusion.

The sclerosis of commercial software development is exemplified by the most notorious IT project in the UK, the Post Office Horizon system, created originally by ICL, which was released despite the dire warnings of the people who participated in its development. It was so riddled with errors that it led to 736 innocent sub-postmasters being wrongly prosecuted for fraud. For some, the result was bankruptcy, imprisonment or suicide. The consequences of poor software delivery can be unimaginably severe.

The problem of ‘silos’

Although everyone could perceive the industry’s problems, nobody could successfully identify a single point of failure, a problem isolated to any one part of the process. Developers were the target of most of the attention, but it turned out that the worst mistakes were due to the traditional ‘siloing’ of IT departments.

From the ‘worm’s-eye view’, it seemed that major IT projects were log-jammed by the absence of the appropriate resources and expertise at the right time. Ops, Development and Test were run as separate departments rather than as a cohesive team that could combine their areas of expertise. In some organisations, they seemed almost like rivals, led by managers who acted somewhat like quarrelling mediaeval monarchs.

The potential for conflict was wired-in. Traditionally, the management of IT production teams was obliged to prioritize stability over change. This was understandable, because software changes invariably came with risks, and architectural changes meant retraining as well as hardware or cloud-service costs.

The management of Development was less risk-averse, and its performance was judged by the organization on the delivery of new functionality to the business. IT developers were rewarded for hitting the milestone where a feature was ‘feature finished’ rather than ‘delivered to production’, a point in project-tracking that was outside the direct control of the development team.

However, batches of finished features could be delayed for ages, waiting for signoff from a different department within IT. These delays were seen very differently by the recipients: the problem, from the perspective of operations, was the poor quality of the ‘feature finished’ code.

A bug in development is tiresome; in deployment, it is the cause of a delay; in production, it disrupts the activities of the organisation. For a while, the solution was the creation of test teams that became a ‘third force’ occupying the deployment pipeline, taking ‘feature finished’ code and passing it through a protracted series of tests. This ‘test phase’ could take up to nine months in a large retail bank, because of the huge provisioning and automation tasks involved in providing reasonably thorough test coverage for the average corporate application. None of this was helped by the fact that the ‘feature finished’ code was often riddled with ‘technical debt’, and came with too little documentation from the developers to create meaningful test harnesses.

The need for DevOps cooperation

DevOps, unlike previous movements in the industry, was a groundswell from the developers, testers and operations people at the sharp end. They were all acutely aware of the shortcomings in the way the delivery of corporate and commercial software and database systems was managed. They were also painfully aware that each of the three activities needed skills that its own practitioners lacked, but which were part of the core competence of one or other of the other two.

They needed to cooperate. Testers needed the provisioning skills of Ops and the automation skills of the developers. Ops weren’t always end-to-end scripters of processes, and they weren’t heavily into test techniques. Developers are, traditionally, useless at testing their own work, and often struggled even to imagine the requirements for maintaining live applications in operation: legal compliance, resilience, disaster recovery and the full spectrum of security issues.

The idea of Ops and Development cooperating right through the development cycle came just as it was becoming more necessary. Technologies such as cloning, virtualisation and containerisation, together with the expansion of cloud facilities, had great potential, but were part of the skill-set of operations, not developers, as was the traditional provisioning and monitoring of services such as databases. Testers were far more familiar with implementing automated processes on VMs and containers. The developers, in turn, were able to bring remoting, workflow and other new scripting techniques to monitoring production systems and tracking production issues.

The success of teamwork based on a diverse skillset was striking. It encouraged other specialist teams that were previously semi-detached from the process to join in, giving helpful feedback as early as possible in the development process. Security people, for example, were able to encourage security by design rather than by ‘steel helmet’ thinking. Technical architects were able to intervene before changes became too expensive to contemplate. Trainers, who had to introduce staff to new corporate systems, were able to prepare training materials and help with usability.

This became known as ‘shifting left’ the contribution of specialists into the development process, and it removed a major blocker to delivering a continuous flow of stable changes to production. Because the specialists knew what was going on in development and had already identified any potential deployment problems during development reviews, teams could avoid the use of discrete ‘releases’ that required signoff and sporadic operational expertise within the staging process.

Suddenly, the development logjam that had seemed immovable began to shift, crack and fragment. At last, IT was able to turn the dream of continuous delivery into a sensible choice. Continuous Delivery (CD) exploits Continuous Integration to build, test and deploy new releases to one or more test or staging environments. Automated tests then run and, if they succeed, the release can be approved for deployment to production. Continuous Deployment can be practiced if all the experts and participants who need to sign off a production release are involved in the review processes that are part of integration. Basically, a CD development requires the participation of the appropriate expertise at the right time, and it is this that DevOps delivers.

DevOps Development

Although the change in culture, management practices and team structures that I’ve described is what made DevOps possible, DevOps Development is best seen in practice as a combination of that culture change with specific practices and tools.

Every vendor of development tools would love to suggest that only their tools provide a royal road to DevOps-style Continuous Delivery, but that misses the point. A DevOps tool participates as a cooperative link in a chain of tasks and processes. The idea of a single ‘console’ or ‘IDE’ that can assist the entire application lifecycle is neither desirable nor possible. It is undesirable because it forces a development team into a particular way of working and locks the developers into a single tool supplier. It is also impossible: I’ve never come across an IDE that came close to covering the entire development lifecycle, let alone the full diversity of requirements, even had that been a good strategy for the developers.

This diversity is generally underestimated. It spans the type of application, the architecture and platforms involved, and the type of development. Although there are many principles in common, there is a world of difference between, for example, embedded systems, relational database systems, and procedural code. The challenges of the systems needed to support the work of an organization, such as a healthcare system, are quite different from the joys and terrors of a small startup with a greenfield application.

The DevOps Toolchain

Unlike a universal IDE, a toolchain is made of links, each link being a tool that can support a chain, or workflow, of automated or semi-automated processes, or pipelines. To do this, the first criterion is that each tool must have a command-line interface (CLI) and a corresponding way of outputting data in an open standard. It should be able to originate messages, warnings and errors, either directly or in one of its output streams.

A linked chain of specialized programming tools can perform a complex software development task. In their simplest form, DevOps tools merely work consecutively, like a chain of dominoes, with each link executed in turn so that the output of the previous tool becomes part or all of the input for the next one.

However, some DevOps tools, such as Git or Jenkins, play more of a coordinating role, and it is usual for a collection of tools, potentially from a variety of vendors, to be used in one or more stages of the lifecycle. The ‘glue’ that creates a toolchain can range from a batch process to a complex workflow. PowerShell is the most powerful of the scripting systems used for DevOps.
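
As an illustration of this kind of ‘glue’, here is a minimal PowerShell sketch that treats Flyway as one link in a chain: it asks Flyway for the state of the database in JSON and only runs the migration, and a follow-on task, if anything is pending. It assumes flyway is on the path, that your Flyway version supports the -outputType=json option, and that connection details are already supplied via a configuration file or environment variables; the Invoke-PostMigrationTasks function, and the exact shape of the JSON, are illustrative assumptions rather than anything defined by Flyway.

```powershell
# A sketch of Flyway as one link in a PowerShell toolchain.
# Assumes flyway is on the path and connection details come from flyway.conf
# or FLYWAY_* environment variables.

function Invoke-PostMigrationTasks {
    # Placeholder for the next links in the chain (documentation, reporting, ...)
    Write-Host "Kicking off downstream tasks..."
}

# Ask Flyway for the state of the database as machine-readable JSON
$raw = flyway info -outputType=json | Out-String
if ($LASTEXITCODE -ne 0) { throw "flyway info failed" }
$info = $raw | ConvertFrom-Json

# The 'migrations' array and its 'state' property are assumptions about the JSON shape
$pending = @($info.migrations | Where-Object { $_.state -eq 'Pending' })

if ($pending.Count -gt 0) {
    Write-Host "Applying $($pending.Count) pending migration(s)..."
    flyway migrate
    if ($LASTEXITCODE -ne 0) { throw "flyway migrate failed" }
    Invoke-PostMigrationTasks   # the output of one link triggers the next
}
else {
    Write-Host "Database is already up to date."
}
```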

Flyway Development

Whatever represents a software change, perhaps a bug-fix, a performance tweak, or a new feature, becomes a task that is tracked, and whose lifecycle is controlled, in an issue tracker or project management system. It thereby becomes visible to the team, and individual tasks are managed in whatever way the team finds most suitable to the development and local circumstances. The team must ensure that all the necessary areas of expertise, such as test, security or architecture, are brought to bear.

Teams will use a branch to implement every task, so that different expertise, or additional developers, can assist with the task where necessary. All branches must be visible to the rest of the team in case there is added value that they can contribute by pooling their expertise; a task must never be assigned exclusively to an individual. Tasks are best kept short, with clear criteria, and generally shouldn’t last more than a couple of days, depending on the work and team methodology. Sometimes, a feature can be ‘parked’ so that its release is delayed; this makes merge problems more likely but is generally better than releasing it at an inappropriate time.

Every live development task must first undergo a series of unit tests and integration tests to ensure that nothing is broken before it is allowed to enter the development branch. A successfully completed and tested task is terminated by a ‘pull request’, which usually results in a code review before a merge is allowed. As well as the already-completed automated integration tests, each task should be given a usability or sanity test in which another team member checks that the task meets the requirement and is likely to make sense for the user. Depending on the nature of the task, other experts may check for issues such as security, compliance, and production-readiness. Lastly, there is a short exploratory or ‘tourist’ test that double-checks that test coverage is reasonably complete.

Flyway and Deployment

Flyway is used to migrate a database to a particular version: either the latest version, represented by a range of migration files in a set of locations, or a specific version within that range. Flyway records the version, and the history of the changes made to reach it, in a special schema history table within the database. Where the type of database system allows it, Flyway manages the change so that, if there is an error, it can be rolled back without trace.
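
To make this concrete, here is a hedged sketch of the command lines involved. The connection details, the locations folder and the target version are placeholders for illustration; in a real project they would normally live in a flyway.conf file or in FLYWAY_* environment variables rather than on the command line.

```powershell
# Placeholders for illustration only; real projects keep connection details
# in flyway.conf or in FLYWAY_URL / FLYWAY_USER / FLYWAY_PASSWORD variables.

# Migrate to the latest version described by the migration files
flyway -url="jdbc:postgresql://localhost:5432/widgets" `
       -user="deploy" -password="$env:DB_PASSWORD" `
       -locations="filesystem:./migrations" `
       migrate

# ...or stop at a specific version within the range
# (connection details assumed to come from flyway.conf or the environment here)
flyway -locations="filesystem:./migrations" -target="4.2" migrate

# Report the schema history's view of applied and pending versions
flyway info
```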

Although Flyway projects have traditionally been used for small-scale microservices, they can also meet the more complicated needs of corporate developments, industrial systems, or embedded databases. In these cases, DevOps requires that a development is more ‘visible’, to make it easier for others to participate and to look for potential issues. It also requires more effective automation of the processes involved, which means making better use of development tools so that reliance on manual processes is reduced to only what the workflow genuinely requires.

Many of the complications of larger database developments manifest themselves in deployment, because teamwork requires more coordination and because corporate systems have additional components in production. Flyway deployments must not only deliver each new development version safely to production but also cope with the additional complications that accompany a corporate deployment.

The most obvious changes will be in the access control system for production. Production databases will also often have scheduled tasks executing batch scripts or SQL scripts, initiated either by the operating system or by a job or task scheduler. This might simply be a backup system, but could also be end-of-day accounting checks, audit, data cleanup, or anything else that can be scheduled. These scripts must be compatible with the new database version being deployed but aren’t strictly part of the database. In certain cases, the database design might involve a distributed architecture across several different databases, perhaps on different servers.

Flyway, fortunately, is designed to be a participant within a toolchain and can kick off other processes. The point at which a new version of a database is successfully created is an obvious point at which to start processes such as documentation, scripting and reporting, and Flyway is well-placed to initiate them. It can be extended not only via Java but also via scripting. Flyway’s power comes from its ability to call scripts before, after or during the database migration processes requested of it, before connecting to a database, and after events such as statement errors. This allows it to be not only a link in a chain of processes but also the initiator. If, for example, Flyway runs a unit test as part of a migration and it fails, it can send a message with the details and initiate the bug-tracking process. A successful migration run can initiate a range of tasks, such as running a script to load data, performing a code-analysis check, determining what database objects changed, or documenting the database. All this means that it can actively participate in, or initiate, workflows.
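
For instance, here is a minimal sketch of a script callback, a Flyway Teams feature (the Community edition supports SQL callbacks such as afterMigrate.sql instead). A file named after the event it handles, placed in one of the locations Flyway scans, runs automatically at that point in the migration; the logging path and the commented-out downstream functions are illustrative assumptions, standing in for whatever your own workflow needs to trigger next.

```powershell
# afterMigrate.ps1 -- a sketch of a Flyway script callback (a Flyway Teams feature;
# Community edition supports SQL callbacks such as afterMigrate.sql instead).
# Dropped into one of the locations Flyway scans, it runs after a successful
# 'migrate'. The downstream steps below are illustrative placeholders.

$timestamp = Get-Date -Format 'yyyy-MM-dd HH:mm:ss'
Add-Content -Path './deployment-log.txt' -Value "$timestamp migration completed"

# Hand over to the next links in the chain, e.g. regenerate documentation
# and notify the team. Both function names are hypothetical stand-ins.
# Update-DatabaseDocumentation -Project 'widgets'
# Send-TeamNotification -Message "Database migrated at $timestamp"
```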

These features make it far easier to automate, where possible, a database deployment pipeline, especially where the application and database are close-coupled and must be deployed together. Even where full automation is impossible, Flyway Teams has the enormous range of settings and configuration options needed to fit into a workflow, and so it is often seen as part of workflows initiated by build automation systems, lifecycle management tools and CI/CD tools.
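
As an illustration, here is a hedged sketch of how a CI job might drive Flyway non-interactively against a staging environment: the connection details arrive as environment variables set by the CI system, and the build fails if validation or migration fails. The STAGING_DB_* variables and the conf/staging.conf file are assumed names for illustration, not anything defined by Flyway itself.

```powershell
# A sketch of a CI/CD step driving Flyway non-interactively against staging.
# STAGING_DB_URL, STAGING_DB_USER, STAGING_DB_PASSWORD and conf/staging.conf
# are illustrative names that the CI system is assumed to supply.

$env:FLYWAY_URL      = $env:STAGING_DB_URL
$env:FLYWAY_USER     = $env:STAGING_DB_USER
$env:FLYWAY_PASSWORD = $env:STAGING_DB_PASSWORD

# Check that the migration files match what has already been applied
flyway -configFiles="conf/staging.conf" validate
if ($LASTEXITCODE -ne 0) { throw "Flyway validation failed: stopping the pipeline" }

# Apply any pending migrations; a non-zero exit code fails the build
flyway -configFiles="conf/staging.conf" migrate
if ($LASTEXITCODE -ne 0) { throw "Flyway migration failed: stopping the pipeline" }
```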

Summary

Many popular development tools that are designed for use by DevOps teams seem rather forbidding on first acquaintance. This is in large part because they are geared for highly automated development processes and are required to work with open standards in such aspects as messaging, file formats and command-line interfaces.

Flyway is no exception: it doesn’t even have a Windows installer. Why? Unlike the previous generation of development IDEs for databases, Flyway doesn’t assume a particular development methodology, and so is more easily accommodated in the systems that individual teams have evolved to meet the demands of the organisation or communities that use them. Although Flyway is ideal for the migrations-based approach for which it was devised, you can use it for the full range of database developments, including traditional builds or hybrid systems, and still get the advantage of its callbacks and versioning.

Nervous adopters will find that it slots into the place of a traditional database-build system without forcing any change, but is ready to take on a role as a member or coordinator of a DevOps system, if or when the development team is ready for the journey.
