Notifiable Data Breaches – and how to avoid them


With the significant growth of data across organizations and the increase in regulations everywhere aimed at protecting that data, the words ‘data breach’ aren’t something any organization wants to hear. That’s the message we often hear in conversations with customers. That said, I thought it would be good to share some insights on what data breaches are, why they occur and how we’ve seen businesses addressing the challenge.

In Australia, a good starting point is the Notifiable Data Breaches (NDB) scheme, which the Office of the Australian Information Commissioner (OAIC) rolled out in February 2018 to improve consumer protection and drive better security standards for protecting personal information. It applies to agencies and organizations covered by the Privacy Act 1988, and the OAIC defines an eligible data breach as one where:

  • There is unauthorized access to or unauthorized disclosure of personal information (or the information is lost in circumstances where unauthorized access to, or unauthorized disclosure of, the information is likely to occur); and
  • A reasonable person would conclude it is likely to result in serious harm to any of the individuals whose personal information was involved in the data breach; and
  • The entity has not been able to prevent the likelihood of serious harm through remedial action

The scheme has teeth too. If an organization hides a data breach or fails to report it, penalties under the Privacy Act apply: for serious or repeated breaches, fines of up to AU$2.1 million for organizations and AU$420,000 for individuals. The Australian government also plans to amend the Privacy Act and raise the maximum penalty to AU$10 million, three times the value of any benefit obtained through the misuse of the breached data, or 10% of an organization’s annual turnover, whichever is the greater sum.

Where do notifiable data breaches come from?

In the OAIC’s most recent Notifiable Data Breaches Report, covering January to June 2020, breaches caused by human error accounted for 34% of the total, an increase of 7 percentage points on the previous six-month period. Malicious and criminal attacks accounted for 61%, while system faults were responsible for only 5%. The top five industry sectors affected were health service providers; finance; education; insurance; and legal, accounting & management services.

While the number of breaches was down by 3% compared to the previous six months, that’s hardly a surprise, given the current situation. The Six-Month Data Breach Analysis for January to June 2020 from the widely respected – and quoted – Identity Theft Resource Center in the US saw a 33% drop, for example.

What’s worrying is that the number of breaches in Australia was still 16% higher than those notified for the same period in 2019. So while the short term trend saw a small dip, the longer term trend is still upwards.

Another important point to note here is that just over a third of breaches were down to human error. Most organizations typically concentrate on protecting their networks and servers from external actors like hackers, but this shows that it is just as important to protect data from internal threats.

How can breaches be avoided?

These insights raise a number of questions for organizations, most notably how to protect their data and ultimately prevent, or reduce the risk of, a data breach.

One key area for reducing risk is the database itself. Many organizations are sitting on decades’ worth of data and are unsure about its complexity and the threats it exposes the business to. That data can also be spread across a number of different databases in a variety of locations, and database copies may well be in use in development, testing and BI environments.

This leaves organizations in a dilemma: if they don’t understand the complexity or the threat, they can neither guarantee that no harm will occur in the case of a data breach, nor take the remedial action required to prevent that harm. As the OAIC says in its Notifiable Data Breaches Report:

The capacity to conduct a timely and thorough assessment and investigation of a suspected data breach can be constrained when an entity does not comprehensively understand its own information environment.

Hence the need for organizations to initiate a full discovery of their database estates to understand where and what data is held, the sensitivity and consequent risks to that data, and the threat to the business should a breach occur. Once they’ve built up a full and detailed picture, they can catalog and classify the data based on its sensitivity and remediate any risk using techniques like data masking.
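To make the discovery and classification step concrete, here is a minimal sketch of the idea: scan column metadata (as you might read it from a database's schema catalog) and tag columns whose names suggest sensitive data. The patterns, labels and table names are hypothetical illustrations only; dedicated tools such as SQL Data Catalog do this far more thoroughly and at estate scale.

```python
import re

# Hypothetical name patterns mapped to sensitivity labels.
SENSITIVE_PATTERNS = {
    r"email":        "Contact - Email",
    r"phone|mobile": "Contact - Phone",
    r"dob|birth":    "Identity - Date of birth",
    r"address":      "Contact - Address",
}

def classify_columns(columns):
    """Build a simple catalog: each (table, column) gets a label or None."""
    catalog = {}
    for table, column in columns:
        label = None
        for pattern, tag in SENSITIVE_PATTERNS.items():
            if re.search(pattern, column, re.IGNORECASE):
                label = tag
                break
        catalog[(table, column)] = label
    return catalog

# Example schema metadata, as might come from INFORMATION_SCHEMA.COLUMNS.
schema = [("Customers", "EmailAddress"),
          ("Customers", "DateOfBirth"),
          ("Orders", "OrderTotal")]
print(classify_columns(schema))
```

A catalog like this gives you the "full and detailed picture" described above: once each column carries a sensitivity label, remediation such as masking can be targeted at exactly the columns that need it.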

That way, even if a breach does occur, it won’t result in serious harm to individuals, and it can be demonstrated that the obligations under regulations like the NDB scheme have been met in full.

This isn’t a one-time task

An important point to note is that this is an ongoing exercise. Databases are, by their very nature, constantly refreshed with new and changing data which will need to be cataloged and classified, with sensitive data masked.
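As an illustration of the masking technique itself, here is a hedged sketch of deterministic masking: each real value is consistently replaced by a fictitious one, so related tables in a masked copy stay in sync while the original values are never exposed. The function names, salt and substitute list are invented for this example; commercial tools like Data Masker use curated substitution datasets and handle many more data types.

```python
import hashlib

FIRST_NAMES = ["Alex", "Sam", "Jordan", "Casey", "Taylor", "Morgan"]

def mask_name(real_value, salt="demo-salt"):
    """Deterministically map a real name to a fictitious one.

    The same input always yields the same substitute, so masked copies of
    related tables stay consistent, while the masked output reveals nothing
    readable about the original.
    """
    digest = hashlib.sha256((salt + real_value).encode()).hexdigest()
    return FIRST_NAMES[int(digest, 16) % len(FIRST_NAMES)]

def mask_email(real_value, salt="demo-salt"):
    """Replace an email address with an opaque but stable dummy address."""
    digest = hashlib.sha256((salt + real_value).encode()).hexdigest()[:8]
    return f"user_{digest}@example.com"

# The same input always maps to the same masked value.
assert mask_name("Alice Smith") == mask_name("Alice Smith")
```

One caveat worth noting: hashing low-entropy values is pseudonymization rather than irreversible anonymization, which is one reason production masking tools substitute realistic fake data instead of relying on hashes alone.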

Fortunately, third-party tools are available that automate the process, reduce the possibility of human error, and provide certainty that new data entering the database is protected, ensuring long-term compliance.

Data cataloging, protection and privacy tools will be key to holding this complex operation together, and have a crucial role to play in understanding the data organizations have and protecting it, empowering businesses to transform their strategies around data protection.

A great example is the Professional Association for SQL Server (PASS). With its worldwide membership, it has to ensure ongoing data security and compliance with regulations like the GDPR in the EU and the CCPA in California, as well as the NDB scheme in Australia.

Using Redgate’s SQL Data Catalog and Data Masker tools, it was able to introduce a streamlined and trusted process for classifying its data and masking the sensitive portions. There’s a useful case study you can read which looks deeper into the issues they faced, how they resolved them, and the benefits they gained.

For more information about how Redgate can help you discover, classify and apply masking to your data to gain a deep understanding of your databases and ensure protection of that data, visit our solution pages online.

Tools in this post

SQL Data Catalog

Discover and classify sensitive data across your SQL Server estate
