It’s just over two years since enforcement of the GDPR began, and it’s also the month when many businesses in the US need to start complying with the CCPA. So it’s an opportune time to talk about one area of compliance that continues to be a stumbling block.
As many compliance teams and DBAs will have discovered on their journey to meet the requirements of the GDPR and now the CCPA, cataloging data is the starting point and the initial effort typically involves three steps.
- Identifying where data is. It’s normal for data to spill over into backups, database copies used by development, QA and BI teams, legacy systems that are still running old but important applications, and databases in outlying offices and partner sites.
- Consolidating the data. It should be in as few locations as possible, so that access can be restricted and you can pinpoint where any and every record is.
- Classifying the data. Data needs to be clearly and consistently labelled with the categories the separate pieces of data fall into so that, for example, GDPR-related data can be demonstrably identified.
The point of the exercise is to find all of the Personally Identifiable Information (PII) a business or organization holds, and to classify the columns that hold personal or sensitive data needing protection.
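As a simplified illustration of the discovery and classification steps, the sketch below scans column metadata for names that commonly indicate PII. The table names, patterns, and category labels are hypothetical examples (not Redgate functionality); a real scan would read the database’s system catalog (for example, `sys.columns` in SQL Server) and use far richer heuristics than name matching.

```python
import re

# Hypothetical column metadata, as might be read from a system
# catalog such as SQL Server's INFORMATION_SCHEMA.COLUMNS.
columns = [
    ("dbo.Customers", "email_address"),
    ("dbo.Customers", "date_of_birth"),
    ("dbo.Orders", "order_total"),
    ("dbo.Sessions", "client_ip"),
]

# Name patterns that often indicate PII; illustrative only.
PII_PATTERNS = {
    "Contact":  re.compile(r"email|phone|address", re.I),
    "Identity": re.compile(r"birth|ssn|passport|name", re.I),
    "Network":  re.compile(r"\bip\b|ip_|_ip", re.I),
}

def classify(column_name):
    """Return the first PII category whose pattern matches, else None."""
    for category, pattern in PII_PATTERNS.items():
        if pattern.search(column_name):
            return category
    return None

# Build a catalog of only the columns flagged as holding PII.
catalog = {
    (table, col): category
    for table, col in columns
    if (category := classify(col)) is not None
}
```

In practice the fragile part is the pattern list: a column called `notes` or `details` can hold PII that no name-based heuristic will catch, which is why sampled data inspection usually complements this kind of scan.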
Under the GDPR, that’s any information that can be used to directly or indirectly identify an individual like their name or IP address, as well as sensitive data such as ethnic origin, political opinions, and genetic and biometric data.
The CCPA is similar and covers information that identifies, relates to, describes, or is reasonably capable of being associated with a consumer or household. It also lists ten different types of data, ranging from names and addresses to race, biometric data and internet browsing history.
That’s not the end of the story
Identifying, consolidating and classifying the data within a business or organization takes time, effort and a significant amount of engagement with multiple stakeholders. It will, though, highlight the personal and sensitive data that needs to be further protected with remediation activities such as masking or encryption, so that you can put in place procedures and processes to demonstrate compliance with data protection regulations.
But what about the data that continues to come through your business every day? Transactions, new customer orders, upgrades, service and support, marketing campaign responses, interactions with your product or service – the list goes on. All of that data has to be identified, classified and labelled in exactly the same way as your current data so that you can demonstrate ongoing compliance.
This is the point where many businesses and organizations realize that creating – and maintaining – a data catalog isn’t a one-time exercise but a long-term strategy to ensure future compliance. To properly protect the sensitive new data they collect, they need a method that eases classification and cataloging, so that data can be protected – and searched for and found – easily.
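The maintenance problem above can be reduced to a simple delta check: compare the columns currently present in the schema against those already classified, and surface anything new for triage. A minimal sketch, using hypothetical table and column names:

```python
# Columns already classified in the catalog (hypothetical entries).
classified = {
    ("dbo.Customers", "email_address"): "Contact",
    ("dbo.Customers", "date_of_birth"): "Identity",
}

# Columns currently in the schema, e.g. after a new feature added
# a dbo.Responses table to capture marketing campaign replies.
current_schema = [
    ("dbo.Customers", "email_address"),
    ("dbo.Customers", "date_of_birth"),
    ("dbo.Responses", "respondent_email"),
]

# Any column without a classification is a compliance gap.
unclassified = [col for col in current_schema if col not in classified]
```

Running a check like this on a schedule – or as part of deployment – is what turns a one-off classification exercise into an ongoing process.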
There is an easier path
When the Professional Association for SQL Server (PASS) was on its own journey to compliance with the GDPR, the IT team were finding it hard to identify and tag data collected by new features being introduced.
In order to provide a reliable record of where sensitive data is located and its precise classification, they turned to Redgate’s SQL Data Catalog. What really interested the team at PASS was how the solution moves discovering and classifying data from a difficult and uncertain exercise to one that is clear and simple.
The single pane of glass provided by SQL Data Catalog makes it easy for the team to see how many columns contain sensitive data, and automatic suggestions coupled with advanced search and filtering speed up classification tasks.
On an ongoing basis, any questions or issues about cataloging data have become easy to resolve because the user interface of SQL Data Catalog makes it simple to navigate the product. Anyone on the team can now find the information they need and revisit anything that requires updating.
Discover how you could gain similar advantages
SQL Data Catalog simplifies and automates the discovery and classification of data stored in SQL Server databases hosted on-premises, in Azure, or on Amazon RDS. It enables SQL Server DBAs to build a clear and accurate picture of their estate, including databases used by third-party apps, so you can understand the true scope of your data and meet compliance targets by:
- Analyzing: Gain a clear picture of data stored in SQL Server databases on-premises, in Azure, or on Amazon RDS
- Simplifying: Automate the discovery and classification of sensitive data
- Combining: Integrate with enterprise data catalogs and metadata management solutions
- Complying: Underpin and evidence your technical policy for regulatory compliance
SQL Data Catalog helps you build an understanding of your enterprise data quickly and at scale, while empowering SQL Server DBAs to work alongside other data owners and information security teams.