Accelerating Digital Transformation: The Role of DevOps and Data


I was recently joined for a live webinar by Tony Maddonna (Microsoft Platform Lead, Enterprise Architect & Operations Manager at BMW) and Hamish Watson (DevOps Alchemist at Morph iT) to discuss their experiences with digital transformation and the impact it had on them, their teams and the wider organization.

One of the key themes to come out of the session was that there's a lot of variability in DevOps. Each organization has its own setup, processes and tooling, but the end goal is always to deliver value to customers. I found the discussions with Tony and Hamish really valuable; even after all my years in the industry I was still making notes throughout, so I wanted to share some of the key insights with you.

What is DevOps?


One of the key things that people should take away is that my DevOps is going to be different to your DevOps. There are, however, some fundamental pillars that organizations should apply or consider in their DevOps journey. For many people, when they think DevOps, they think tooling. But you need to look at what that tooling offers, and for me that's about delivering value.

There’s a whole heap of things involved in that but the key is deploying far more frequently. I believe in the philosophy of increasing the frequency and quality of deployments to deliver value, and if you hang on to those two tenets, you’re going to have a better experience.


I’m glad that Hamish said value delivery, because I’m so tired of hearing that DevOps equals speed. A lot of people are enamored with speed, whether it’s speed of commit, deployment or feature delivery, but they forget about the quality aspect. If I’ve delivered faster, but it’s bad quality, unstable, unreliable or inefficient, that’s a failure.

There’s something to be said about high-speed delivery, especially in the cloud native world where everybody wants something yesterday, but if you don’t deliver with a value stream then you’re missing the point. You need to be making improvements or increasing reliability and efficiencies.

What triggers a move to DevOps?


For me at BMW it was the concept of doing more with less. It tends to start with efficiencies at the profit center – how can we leverage our profits, expand, and get more bang for our buck? Then when you start looking at the different areas, everybody adapted the concept in a way that met their criteria. For example, an infrastructure group doesn’t theoretically develop code, but they develop the processes and methodologies of support and operations. So, how can you make that more efficient, tune it and continuously build upon it to again add that value stream?

The old school mentality is ‘If it ain’t broke, don’t fix it’, and I’m still a big proponent of that, but it doesn’t mean don’t improve upon it. When something just works, we get complacent and as a result don’t revisit it. In the DevOps mentality, we’re always trying new things and striving for improvement whether that be the user interface, the customer experience, or even the business proposition. What can we do to continuously adapt, do more with less and make it more efficient?


Too often companies rely on their customers or end users to be their main quality assurance or quality control team. I've lost count of the number of times I've come into a client to find they're doing one big, intensive deployment a month, and their clients are testing it for them. The key takeaway for me is the need for consistency. With data, if we get it wrong and don't have a fix, serious things are going to happen, so testing is key.

Creating an environment to incorporate a whole heap of testing like Tony has at BMW means you’re not using your end users as an insurance policy. Customers don’t want to be the quality assurance team, so by focusing on testing, and automating that testing through to the end product, you know it works. That is where moving to DevOps can deliver value.
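The idea of automating testing through to the end product, rather than leaving it to customers, can be sketched as a post-deployment smoke-test gate. The check names and pass/fail logic below are illustrative, not something described in the webinar:

```python
# A minimal post-deployment smoke-test gate: the pipeline runs every
# check automatically and only promotes the release if all of them pass,
# so end users never become the QA team.

def smoke_test(checks):
    """Run each named check; return the names of any that fail."""
    failures = []
    for name, check in checks.items():
        try:
            ok = check()
        except Exception:
            ok = False  # a crashing check counts as a failure
        if not ok:
            failures.append(name)
    return failures

# Hypothetical checks a deployment gate might run; real ones would hit
# the database and application.
checks = {
    "schema_version_matches": lambda: True,
    "critical_query_responds": lambda: True,
    "no_orphaned_rows": lambda: True,
}

gate_failures = smoke_test(checks)  # empty list means the gate passes
```

In a real pipeline these checks would query the deployed database and application, and a non-empty failure list would block the release or trigger a rollback.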

How do you balance trying new things with not making too many mistakes?


One of the key tenets of DevOps is feedback loops. I'm a huge advocate of having production-like environments available to developers so that they can try stuff out, typically on their laptop or maybe in the cloud. Far too often, testing environments are totally disparate rather than using the exact same schema and setup as production. But you can pull applications, functions, and a database from source control and know that it matched production when it was released. I think this empowerment of developing and testing with a production-like environment is a game-changer.
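The idea of building a test environment from source control so its schema matches production can be sketched in miniature. SQLite and the script names here stand in for whatever database engine and migration scripts a real team would keep in their repository:

```python
import sqlite3
import tempfile
from pathlib import Path

def build_test_db(schema_dir):
    """Create a fresh database and apply every schema script from the
    source-controlled folder, in order, so the test environment matches
    what was released to production."""
    conn = sqlite3.connect(":memory:")
    for script in sorted(Path(schema_dir).glob("*.sql")):
        conn.executescript(script.read_text())
    return conn

# Stand-in for a checkout of the schema folder from source control.
repo = Path(tempfile.mkdtemp())
(repo / "001_create_orders.sql").write_text(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL);")
(repo / "002_add_status.sql").write_text(
    "ALTER TABLE orders ADD COLUMN status TEXT DEFAULT 'new';")

conn = build_test_db(repo)
cols = [row[1] for row in conn.execute("PRAGMA table_info(orders)")]
```

Because every developer builds from the same ordered scripts, each local or cloud sandbox ends up with the same schema that shipped, which is what closes the feedback loop Hamish describes.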


I agree 100%, and we use ourselves as guinea pigs. When someone gets an idea to change something, like a new stored procedure, we bounce it around the team, and once we get feedback to say it's okay, we'll roll it out. Our second tier of testing is an external provider, as they're always happy to say if something didn't work. Then we can work out why, whether that's a rights issue, a process problem, or a security issue. I call it 'eat your own dog food'. Anything that we do, we use it ourselves. I'm happy to set up a production system and throw my own code out there to see how it reacts, because at the end of the day, as long as I don't impact production, it's still a test box. Use your team as your litmus test.

How do you get started with Database DevOps?


This may sound totally obscure to a lot of people, but my first step is to put it through a lean process analysis. If you're not familiar with it, it's a Six Sigma approach from manufacturing that aims for the most stable environment and the fewest errors, using the bare minimum of processes and tools.

One of the first things I did here at BMW was to explain that it was taking too long to develop our standard solution every time Microsoft released a new version of SQL Server. Then I went through all the processes, and of the 872 I identified, I needed four. With DevOps too you really need to be lean by default: you don't want a lot of bloat in your processes or your code. You have to look at your process first and define what you're trying to accomplish, and the steps required to achieve it.

When we did our solution set for SQL Server 2014, it took about 30 months. When we went to 2016, that was cut in half, down to about 15 months, and by 2019 we were down to about ten months. For 2022 I plan to hit six months. It's got to get faster, simpler, more efficient and better quality. You don't need to reinvent the wheel, just take what you've got and make it better.


Often DBAs and data professionals look at the automation that DevOps encourages and are scared it will put them out of a job. The way I align the DBA is to ask: 'Do you want better working hours? Or do you want to get up at three in the morning to do that horrible manual deployment of code into production that no one's tested, where if it fails, it's on you?' No one wants that, so that's how I get DBAs on board. If we can automate this, you're going to have more time to tune the indexes, tune the queries and actually look at things. You can then start to make a real impact and utilize your ICT to deliver value to clients.

What about compliance?


We’re looking at global data compliance, so we always worry about data privacy. I’m a little less worried if it’s my own backyard, because I have a fence up. If I believe that my infrastructure guys have the IPS system working the way it’s supposed to, and my firewalls are where they need to be, then I should be secure. But I’m also not going to take non-obfuscated or unmasked production data and put it in my test environment.

Unfortunately, I don’t have the luxury of having one standard to adhere to because globally every country and region has a different opinion on this. Nobody can agree, so that’s where I do my 90/10 rule. I try to make sure that I can come up with a common denominator that fits 90% of the criteria, and then whatever is specific to a particular area we customize just for that.

It’s like moving to the cloud. Are you in Azure or AWS? If you’re a global company, chances are you’re in both, but do you have two solutions? No, you want to have one solution that meets both requirements, and then you just tweak it. That’s where I push it back to the lean process, and use that global standard.
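Tony's 90/10 rule, a global standard plus region-specific tweaks, can be sketched as configuration merging. The policy keys and regions below are hypothetical, chosen only to illustrate the shape of the idea:

```python
def effective_policy(base, regional_overrides, region):
    """Start from the global standard (the 90%) and layer on only the
    deviations specific to one region (the 10%)."""
    policy = dict(base)
    policy.update(regional_overrides.get(region, {}))
    return policy

# Hypothetical global compliance standard and per-region deviations.
base = {"mask_pii": True, "retention_days": 90, "encryption": "AES-256"}
overrides = {
    "DE": {"retention_days": 30},    # stricter local retention rule
    "NZ": {"data_residency": "AU"},  # nearest available sovereign region
}

de_policy = effective_policy(base, overrides, "DE")
```

A region with no entry in the overrides simply inherits the global standard unchanged, which is exactly the "one solution that meets both requirements, and then you just tweak it" approach.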


I love what Tony said there because masking our data is vital. Here in New Zealand data governance is so important because at the moment we don’t have any cloud providers that allow me to have data sovereignty here in New Zealand. The closest is Australia, so it’s hosted over there. To get any form of cloud adoption over the line, data governance is essential, and we have to look at the controls associated with that, including security.

We need to be doing our data governance lower in our environments, masking our data from production and spinning up production-like environments which allow people to develop against them. By doing this you know that personal information is not going to be visible or, worst-case scenario, leaked. Going that next step with continuous integration, continuous delivery and continuous security means having those controls and measurements the whole way through and incorporating security into our pipeline.
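One common way to mask data on its way from production to lower environments is deterministic tokenization, so the same input always yields the same token and joins across tables still line up. This sketch uses a truncated hash and a hypothetical row shape; note that hashing like this is pseudonymization rather than full anonymization, and a production masking tool would do considerably more:

```python
import hashlib

def mask_row(row, pii_columns):
    """Replace PII values with a deterministic token before the row
    leaves production, so developers get realistic, joinable data
    without seeing personal information."""
    masked = dict(row)
    for col in pii_columns:
        if col in masked:
            digest = hashlib.sha256(str(masked[col]).encode()).hexdigest()
            masked[col] = digest[:12]  # short, stable, non-reversible token
    return masked

# Hypothetical production row; only the PII column is transformed.
row = {"id": 7, "email": "jane@example.com", "balance": 120.5}
safe = mask_row(row, pii_columns={"email"})
```

Because the token is deterministic, two rows referencing the same email still match after masking, which keeps production-like environments usable for development and testing.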

How do you measure the success of DevOps?


I can only speak for my team, because every DevOps team does it their own way, but for me it's whether I can deploy code and it works efficiently and effectively the first time. I keep it simple. On the flip side, though, I also look at how many mistakes we've found. It may sound counter-intuitive, but the more mistakes we find, the more productive we've been. Mistakes are positive because, human nature being what it is, we learn from mistakes; we don't learn from our successes. We appreciate our successes and pat ourselves on the back for them, but at the end of the day I remember the stuff I did wrong, so I don't do it again.


As an ex-DBA, I was never paid by the backups I did, I was paid by the restores. So for me the key thing was failure rate, but more importantly the time to restore, roll back, or roll forward. If something goes wrong, how quickly can we remediate it, whether with a bug fix or by restoring what we had before? If I can say there's something wrong in production and we can remediate it within five minutes, because we have all the steps along the way, having that confidence is massive.
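The two measures Hamish describes, failure rate and time to restore, can be computed from simple deployment records. The record shape here is an assumption for illustration, not a format from the webinar:

```python
from datetime import datetime, timedelta

def deployment_metrics(deployments):
    """Compute change failure rate and mean time to restore from a list
    of deployment records (shape is illustrative)."""
    failures = [d for d in deployments if d["failed"]]
    rate = len(failures) / len(deployments)
    restore_times = [d["restored_at"] - d["deployed_at"] for d in failures]
    if restore_times:
        mttr = sum(restore_times, timedelta()) / len(restore_times)
    else:
        mttr = timedelta()
    return rate, mttr

# Two hypothetical deployments: one clean, one failed and remediated
# five minutes later.
deployments = [
    {"failed": False, "deployed_at": datetime(2022, 1, 3, 9, 0)},
    {"failed": True,
     "deployed_at": datetime(2022, 1, 5, 9, 0),
     "restored_at": datetime(2022, 1, 5, 9, 5)},
]

rate, mttr = deployment_metrics(deployments)
```

Tracking both numbers together matters: a team that deploys often but remediates in minutes is in a very different position from one with the same failure rate and hours-long restores.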

In terms of making database deployments, how often is best practice?


The least number of times it’s required. Every database is different, and it depends on what that database needs.


I've worked in companies where they're deploying the database multiple times a day with true blue-green deployments. However, I've also worked with companies that embrace all the tenets of DevOps, and do it fantastically, but still only deploy the database once every sprint. So, as Tony said, it's whatever works for your situation.

If you want to know more, you can catch the full discussion by watching our webinar, Accelerating Digital Transformation: The role of DevOps and Data.

