Data Governance, DevOps, and Delivery


I sometimes cringe when I remember the intolerance I once had, as a young developer, for the governance process in IT. It seemed, at the time, tantamount to interference in a process that the developer knows best by dint of being immersed in the task full-time for months on end. In fact, a wide range of skills and knowledge is required to make a successful application. Some specialized contributions, such as legal and accounting expertise, may be brief though essential, whereas others, such as Operations and Test, are required throughout the development cycle. The success of any application hinges on the work of designers and developers, but they are only a part of the team that is required. They are necessary but not sufficient. The IT governance process has an essential role throughout the database and application lifecycle.

The Overall Responsibilities of Governance in Application Delivery

The governance process of any large organization includes a complex set of controls, systems and processes that ensure that decisions are in the best interests of the organization as a whole. The major role of IT governance is in defining the standards for any development or acquisition of applications that the organization undertakes, and for all aspects of operations. These standards must closely reflect, and respond to, the wishes and strategies of the organization. There will be policies for defining service levels, service quality, continuity and recovery. There will be compliance frameworks and guidelines, too. Governance has the responsibility for sharing them, updating them, extending them, and applying them throughout the database lifecycle.

When it comes to the delivery of applications, this role is extended. A major role of the IT governance process in supporting application development is to run checks on any proposed application to ensure that past mistakes are not repeated. Although developers will be aware of many of the technical errors that have plagued the delivery of applications in the past, many of the management, legal and business potholes in the development roadmap are less evident to a software engineer. As with traditional engineering, it is when things go wrong that IT, as a profession, builds its knowledge of the many checks that need to be made on any public-facing application. Unlike airplane crashes, or the collapse of buildings, organizations can often hide many of their IT failures from the public eye, but it is nevertheless true that application developments within enterprises are very prone to partial or complete failure. The more circumspect and risk-averse the team is toward release and deployment, the less the chance that something bad will happen. This caution doesn't necessarily come at the cost of speed; it aims to ensure that the delivery of software meets the organization's objectives. There are many routine checks to be made.

In the more traditional development methodologies, the fundamental checks come when the application is completed and moving, in fits and starts, down the deployment pipeline. If a check failed, the deployment was often beyond rescue because of the interdependencies. I remember a deployment for a system that was halted at the last minute because the head of the business unit refused to allow his employees to have PCs on their desks, for fear that they'd play solitaire all day rather than do their work. Nobody had checked for this type of user-acceptance problem. The application had to be rewritten for Unix-based thin clients.

What are the main concerns? There is always the general governance task of ensuring that the IT applications being introduced are aligned as closely as possible with the aims of the organization as a whole, and of checking that the information systems are safeguarding assets, maintaining data integrity, and operating effectively to achieve the organization's goals or objectives. Additionally, governance bears the responsibility for ensuring that the following areas have been covered.

  • Any organization needs to be able to confirm that its public-facing applications conform to the legislation within the areas in which they operate. These areas include security, confidentiality, and privacy.

  • Data has to be retained and used in ways that comply with all the obligations that are appropriate to the nature of the data being held. There are obligations on organizations regarding the length of time that data can, and must, be held.

  • Applications that are used internally by the members of the organization need to be designed to be as usable as possible by staff, and by any third parties that are obliged to use them. This is usually defined in law, in international standards, and by longstanding agreement with staff. With a corporate-scale application, there must be written procedures for dealing with any ‘usability’ problems.

  • Applications must be accessible to people with disabilities, or at least modifiable to allow their use. The most obvious requirements concern colour-blindness, partial sight and limited keyboard abilities. There are likely to be company-wide policies on this topic.

  • Data processing must meet all the standards of the industry or the type of organization. Industries generally sign up to voluntary codes of practice, and these must be followed (see the NHS Digital standards as an example).

  • There must be a valid license to use all the components of the application, including open-source modules and libraries.

  • All parts of the application must be completely secure to industry standards and must be proof against penetration testing.

  • The operations team must be confident that the application can be supported within budget.

  • Enterprise applications must have any training element spelt out in detail and budgeted. This is often overlooked and can end up being a huge unexpected extra cost.

  • There must be disaster-recovery and business-continuity plans in place for the application, as well as routine backup and resilience plans.

  • Any financial data and procedures that are used by the organization must have an external audit process that can detect illicit changes and provide reliable evidence.

  • If a release requires shared resources, such as network admin or security checks, its timing must be planned within the context of the wider enterprise by some form of Enterprise Release Management (ERM), particularly where the application is part of a wider large-scale system.

  • The application that is delivered at any stage must meet service level agreements and be close to any service level targets that have been agreed for the application.

  • The users of the application and ‘stakeholders’ must be as involved as possible and confident enough that the application will meet their needs so that signoff becomes a formality.

You will notice that several of these checks and activities must be done at a particular stage in the project if both re-engineering and U-turns are to be avoided. This is particularly true of checks for the existence of plans, contracts, and agreements. SLAs, for example, must be in place early on so that the scale of hardware and network provisioning can be gauged and costed.

Continuous Delivery and DevOps

Development teams generally work best by producing software in short cycles. The continuous delivery of changes, when done properly, reduces cost, time, and risk. You would be hard pressed to find any developer who disagrees with the idea, but it wasn't practical in the past because testing an enterprise-scale application used to take several months.

This made infrequent releases inevitable, however much we wanted them to be more frequent. It was a source of frustration for business people who urgently needed changes in the middle of this long cycle but found them impossible to get. No matter how desperately they needed the changes, users had no choice but to wait until the next big release to obtain important features.

What changed to make continuous delivery possible?

  • The development of rapid testing techniques has allowed more incremental updates to applications that are in production, via a straightforward and repeatable deployment process. There was a time when testing was extraordinarily slow, of the order of months, because so much of it was manual. Test tools, organizational approaches, and applications have improved enormously in the past decade. Virtualization and cloud technologies have removed the old hardware restrictions and allowed tests to be run in parallel (a minimal sketch of this approach follows the list). By harnessing the expertise of operations experts as well as developers, testers have revolutionized the process.

  • Development tools have made the development process more visible to governance, so that governance, and the other activities within IT, can act to prevent problems at the early stages of development rather than merely acting as gate-keepers within the deployment pipeline.

  • Changes in IT culture and the introduction of multi-discipline teams have allowed Operations and Governance to work more closely with the delivery teams and contribute their specialized knowledge at the optimum time. This has made it easier to make informed decisions about processes, technologies, and data without having to repeatedly go back and change code. The aim is to reduce design changes and avoid development U-turns.

  • A more conscious effort to improve team processes, not just by examining better ways of team-working, but by providing easier communication and sharing of materials.

  • The adoption of toolchains, chains of software tools that each do one thing well and share a common way of passing data, aids in the delivery, development, and management of applications throughout the database lifecycle (a second sketch after this list illustrates the idea). Rather than use a restrictive, all-encompassing Integrated Development Environment, a toolchain allows team members to adopt tools that are a closer fit to their requirements and encourages different teams to work together on delivery.
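
To make the parallel-testing point in the first bullet more concrete, here is a minimal sketch, in Python, of independent test suites running side by side rather than one after another, with feedback arriving as each suite finishes. The suite names and pytest commands are purely illustrative placeholders for whatever test runner and environments a team actually uses.

# A minimal sketch of running independent test suites in parallel.
# The suite names and commands are hypothetical placeholders.
import concurrent.futures
import subprocess

TEST_SUITES = {
    "unit": ["pytest", "tests/unit"],
    "integration": ["pytest", "tests/integration"],
    "api": ["pytest", "tests/api"],
}

def run_suite(name, command):
    # Each suite runs as its own process, much as it might on its own VM or cloud agent.
    result = subprocess.run(command, capture_output=True, text=True)
    return name, result.returncode == 0

with concurrent.futures.ThreadPoolExecutor() as pool:
    futures = [pool.submit(run_suite, name, cmd) for name, cmd in TEST_SUITES.items()]
    for future in concurrent.futures.as_completed(futures):
        name, passed = future.result()
        # Feedback arrives as each suite completes, not at the end of a long serial run.
        print(f"{name}: {'passed' if passed else 'FAILED'}")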

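The toolchain point in the last bullet is also easier to see in miniature. The hypothetical Python sketch below shows the idea: each step does one thing well and passes its results on in a common format (a plain dictionary standing in for, say, JSON on the command line), so any individual tool can be replaced without disturbing the rest of the chain.

# A hypothetical toolchain: small single-purpose steps sharing one data format.
def get_sources(context):
    context["version"] = "1.4.2"  # e.g. read the version from source control
    return context

def build(context):
    context["artifact"] = f"app-{context['version']}.zip"  # e.g. compile and package
    return context

def run_tests(context):
    context["tests_passed"] = True  # e.g. invoke the team's test runner
    return context

def publish_report(context):
    status = "ready for staging" if context["tests_passed"] else "failed"
    print(f"Build {context['artifact']}: {status}")
    return context

# Because every step reads and writes the same shared format, tools can be
# added, removed or replaced without disturbing the rest of the chain.
context = {}
for step in (get_sources, build, run_tests, publish_report):
    context = step(context)
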
All these changes have, together, made Continuous Delivery possible, even in more conservative settings. It is a goal worth achieving even if a continuous release schedule isn't appropriate for the organization's requirements, because it allows the organization to choose to release at a time that best suits the organization, rather than at a time that suits the development cycle. What Continuous Delivery contributes to the application development process is that it allows software to be released to production at any time, through optimization, automation, and utilization of the build, deploy, test and release process. Once Continuous Delivery is achieved, it then becomes possible to move to continuous release, if that is right for the organization. This means that Governance and the organization's management can, at last, decide when to release purely for business reasons, confident that the time and frequency they decide on will result in a stable and reliable release.

Continuous Delivery simply gives the business the opportunity to deploy the new functionality in a release. Timing of a release might be important: nobody wants unnecessary changes to an enterprise application during a busy period, for example. It may be that the software requires special industry-standard certification tests by an independent organization before it can be made operational. There may even be a requirement for several business groups or ‘stakeholders’ to do separate acceptance or usability tests. What is important is that the team gets the choice of when to release rather than facing the inevitability of long development cycles between releases.

The delivery team must always work closely with governance and operations, whatever methodology is used. However, where a team is working towards continuous delivery, any disruption in the working relationship can be a sticking point, and this is a real risk where very different cultures can cause misunderstandings. To combat this, development teams must include multi-disciplinary team members, particularly in activities where their skills and experience come to the fore, such as system administration, database administration and operations. Delivery teams will also have access to business analysts, QA specialists, and developers, as well as functional and technical experts. System administrators are best placed to install and maintain the many new tools and toolchains that are required; skills in operating system, application, and network configuration are also needed. Delivery teams need embedded members with extensive experience and deep knowledge of system administration and operations who can liaise confidently with the operations function, as well as embedded members with the skills to coordinate with governance and make sure the right people are involved at the right time. The project manager is not always the best-placed person to do this, and direct communication can improve speed and clarity.

The key to continuous delivery is to create a deployment pipeline that automates everything possible, such as build, integration, and test, all the way from source control through to staging. It must provide immediate feedback and be visible to everyone in the delivery team, operations, and governance who is responsible for any aspect of the application. If, for example, a change is made to the graphical user interface, or to the way business processes are done, then training materials can be changed to reflect that before release. Testers can work out strategies for increasing test coverage for the areas of functionality most likely to be affected by change, and determine what changes need sign-off via user-acceptance tests. Changes in wording can be checked for compliance with the legislative framework and, in the case of a multi-lingual application, translated. Any change can have repercussions that may not be obvious to the designers, developers or project managers, so it is best to let everyone with responsibility for the outcome check progress.
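
The shape of such a pipeline can be sketched very simply. The Python below is a hypothetical illustration only: the stage names and commands are placeholders for whatever build, test and deployment tooling a team actually uses, but it shows the two essentials, namely that every stage from source control to staging is automated, and that failure is reported immediately and visibly.

# A hypothetical, minimal deployment-pipeline runner: automated stages run in
# order, with immediate and visible feedback, stopping at the first failure.
import subprocess
import sys

STAGES = [
    ("checkout", ["git", "pull"]),
    ("build", ["make", "build"]),
    ("unit tests", ["make", "test"]),
    ("deploy to staging", ["make", "deploy-staging"]),
    ("acceptance tests", ["make", "acceptance"]),
]

for name, command in STAGES:
    print(f"--- {name} ---")
    if subprocess.run(command).returncode != 0:
        # Fail fast and loudly, so that everyone can see which stage broke.
        print(f"Pipeline stopped: '{name}' failed.")
        sys.exit(1)

print("All stages passed: the release candidate is in staging.")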

Continuous Delivery and Governance

The most difficult question is how to adapt the governance process to fit in with a culture of continuous delivery. The DevOps movement also focused on organizational change, to support effective collaboration between the many functions involved in Continuous Delivery. It was rare to find any mention of the other groups that had to be involved, though people with operational backgrounds were often far more familiar with the parts of IT responsible for governance. For the most part, it turned out not to matter: the aspects of the DevOps toolchain that promoted visibility and communication suited governance just fine. Where there is a legal requirement for sign-off, as is usually the case with the security of a release, the signatory will have been able to run security checks beforehand, or to check whether the recent changes were likely to require a full security check. By being able to see at least a fortnight ahead in planning, governance can be more confident of the implications of the current sprint.

Database Administration for production will have had a continuous series of versions of the database in staging, will have access to all the code in version-control, and will have the results of load-testing, limit-testing and scalability testing. The DBA will most likely be fully involved with setting up the test cell. Because of all this, there should be no surprises. Furthermore, the DBA will be able to give full and frank feedback to the relevant people within the delivery team.
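
As a small illustration of what ‘no surprises’ can mean in practice, a DBA might routinely confirm that the schema version in staging matches what is expected from version control before commenting on a release. The Python sketch below uses pyodbc; the connection string, the dbo.SchemaVersion table and the expected version are all hypothetical and would depend on how the team actually records its database versions.

# A hypothetical check that the staging database is at the schema version
# expected from version control. Table, columns and connection details are
# placeholders for whatever the team actually uses.
import pyodbc

EXPECTED_VERSION = "2024.03.1"  # e.g. taken from a tag or manifest in version control
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=staging-sql;DATABASE=AppDB;Trusted_Connection=yes"
)

conn = pyodbc.connect(CONN_STR)
row = conn.cursor().execute(
    "SELECT TOP 1 version FROM dbo.SchemaVersion ORDER BY applied_on DESC"
).fetchone()
conn.close()

if row and row.version == EXPECTED_VERSION:
    print(f"Staging is at the expected schema version {EXPECTED_VERSION}.")
else:
    found = row.version if row else "no recorded version"
    print(f"Mismatch: staging reports {found}, expected {EXPECTED_VERSION}.")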

Summary

It is hard to imagine the sort of excitement that was generated by the DevOps movement being repeated for Governance and Software Development; a group hug with the IT managers seems unlikely. It turns out, though, that the culture shift that brought the two different aspects of IT within organizations together to cooperate is also sufficient to allow Governance to adopt the same approach. Where Continuous Delivery has succeeded, this has happened in an unobtrusive way, using the DevOps techniques. By providing visibility of the process, and better teamwork across IT specialities, governance can shift its involvement earlier in the development process and make sure that any wrong paths are merely short detours. Reducing the need for sign-offs in the deployment pipeline, and speeding up the remaining sign-offs to a brief formality, makes the ideal of continuous delivery much easier to achieve.


About the author

William Brewer


William Brewer is a SQL Server developer who has worked as a Database consultant and Business Analyst for several Financial Services organisations in the City of London. True to his name, he is also an expert on real ale.
