Enforcement of the GDPR began in May 2018 and, across the EU, the months since have been relatively quiet, with few fines handed down for non-compliance. Indeed, most organizations probably think their preparations have been worth the effort, and that they are well prepared for complaints from customers or investigations by regulatory authorities.
That confidence may have been shaken in the last week or so with news of Google being fined $57 million for failing to comply with the GDPR. French regulators found that Google hadn’t gained the consent needed for using certain data in personalizing ads, was not clearly presenting information about how the data of users is handled and stored, and made opting out of targeted advertising a difficult process.
What’s important here is that Google is a well-managed, profitable business with the resources and the people to address issues like complying with the GDPR. The company’s compliance pages on its own website even say: “Keeping users’ information safe, secure and private is among our highest priorities at Google”. If Google can be caught out, what about the rest of us?
The fine isn’t actually much for the tech giant – its parent company, Alphabet, has annual revenues of over $100 billion. But it will likely force Google to change the way it handles data, and may also have implications for other tech companies of all sizes. Google is appealing the decision, and this will be an interesting case for data professionals to follow, since we may need to ensure that we can comply with the final ruling. Many of us right now view the data in our organizations as belonging to the business we work for, with free rein in how we handle, process and store it. That may change quickly if the ruling is upheld.
Most of the decisions about how companies deal with data are made by others, but data professionals often need to ensure that we comply with whatever rules our organizations decide to follow. This means a number of practices must be considered.
At a high level, we need to know what data is affected by the GDPR or any other privacy regulation. This requires that organizations have a data catalog that allows them to track which data is sensitive and must be handled carefully. Few organizations have a comprehensive data catalog already, so this will be an area to focus resources on during 2019.
Once we’re aware of where sensitive data is stored, we must take steps to protect it throughout our organization. Most companies have implemented security in their production environments, but their data handling practices in development and testing are often not the same. The GDPR calls for anonymization, randomized data, encryption and other protections, which data professionals will need to implement in a consistent manner throughout their IT infrastructure.
Finally, accidents and malicious attacks will take place. This means that every organization really needs a process to detect data loss and a plan for disclosing the issues to customers. Auditing of activity, forensic analysis and communication plans need to be developed, practiced and distributed to employees who may be involved in security incidents.
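For the auditing side, one useful property is tamper evidence: if each audit record includes a hash of its predecessor, gaps or edits in the trail become detectable during forensic analysis. A minimal sketch of that idea (the record fields are illustrative, not a standard):

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_event(log: list, actor: str, action: str) -> dict:
    """Append a tamper-evident audit record: each entry hashes its predecessor."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "prev_hash": prev_hash,
    }
    # Hash the canonical JSON form so any later change to the entry is detectable.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode("utf-8")
    ).hexdigest()
    log.append(entry)
    return entry
```

Verifying the chain end-to-end is then a simple walk comparing each entry's `prev_hash` to the hash before it.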
There may be other preparations needed and, the larger the company, the more work will be required. Tools are critical here, both to complete the work in a timely manner and to show regulators that actions are underway to better protect data. The more effort that is put into achieving compliance, the less likely it is that regulators will assess hefty fines like the one being faced by Google. Under the GDPR, fines of up to €20 million or 4% of annual global turnover, whichever is higher, can be levied, so Google actually got off lightly.
Interestingly, the key to compliance is already in play and was called out in the latest Accelerate State of DevOps Report from DORA. For the first time, it called out database development as a key technical practice which can drive high performance in DevOps. It revealed that teams which do continuous delivery well use version control for database changes and manage them in the same way as changes to the application. It also showed that integrating database development into software delivery positively contributes to performance, and changes to the database no longer slow processes down or cause problems during deployments.
The point is that introducing DevOps practices like version control, continuous integration and automation to database development helps achieve compliance with data protection regulations. By default, it also streamlines and standardizes development, and provides an audit trail of the changes that are made. Organizations that adopt it can thus balance the need to demonstrate compliance with the desire to deliver value to customers faster.
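A minimal sketch of version-controlled database changes, using SQLite and hypothetical migration scripts: each schema change is a numbered script kept in the repository alongside application code, and a version table records which scripts have been applied, which is itself a small audit trail.

```python
import sqlite3

# Ordered, version-controlled migrations; in a real project each entry
# would be a .sql file committed to the repository.
MIGRATIONS = [
    (1, "CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT)"),
    (2, "ALTER TABLE customers ADD COLUMN created_at TEXT"),
]

def migrate(conn: sqlite3.Connection) -> int:
    """Apply any unapplied migrations in order; return the new schema version."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (version INTEGER NOT NULL)"
    )
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for version, sql in MIGRATIONS:
        if version > current:
            conn.execute(sql)
            conn.execute(
                "INSERT INTO schema_version (version) VALUES (?)", (version,)
            )
            current = version
    conn.commit()
    return current
```

Because `migrate` skips anything already applied, it is safe to run on every deployment; tools like Flyway or Redgate's own products follow the same basic pattern at much larger scale.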
There will undoubtedly be plenty of other fines in the future, not just under the GDPR but under the many other data regulations around the world, from long-standing ones like HIPAA and SOX to newer measures like the SHIELD Act in New York State. It’s worth following the case with Google to see how stringently the regulations will be enforced, because it’s likely new regulations will take a similar stance.
The world of data handling is changing, and every organization needs to get used to disclosing its practices more openly to customers, adopting better tooling, and strengthening the protection of the data assets it holds.
During the last two years, Redgate has been preparing for the GDPR to take effect in the European Union. As a company based in the UK, we recognized that there were both challenges and opportunities for our business.
We needed to ensure we were compliant with the regulations, which would likely require us to change our own processes and educate our employees. At the same time, our customers would face similar challenges and we could help them achieve compliance with software tools. You can find out more about those tools on our Compliant Database DevOps pages.