Spoofing Data Convincingly: Masking Continuous Variables.

In any real numeric data from a database, you are only rarely going to see a normal distribution of values. Sales data will rise and fall with the time of year and the economic cycle. The date on which a record was entered will vary with the workload. If you plot real data, you are unlikely to see a bell curve or a straight line; it will look more like the Welsh hills (or, more technically, 'multimodal'). Truly independent data tends to conform to the bell curve, whereas dependent data tends to be multimodal.

Imagine that you have a table of invoice values. You will want your spoof data to follow the same ups and downs as the real data over time. You may be able to match the overall distribution of the real data, but the result would still be useless for seeing the effect of last year's sales promotion: if your marketing people have done their job properly, the invoice values will depend on your sales promotions.

By giving your data the same distribution as your production data, you don't guarantee that the query optimizer will choose the same strategy, but you dramatically increase the chances of it. SQL Server uses a complex cost-based approach to select among the alternative plans for a query. It maintains distribution statistics for every column and index that is used for selecting rows. These aren't histograms in the classic sense, but they perform a similar function and are used by the SQL Server engine to predict the number of rows that a query will return.

The easiest numeric data to spoof is an independent variable, so we can start with that.

We will collect the current values and use them to create a histogram. We then use this to select the bin (I prefer the alternative term 'bucket', as the word 'bin' is too overloaded!) from which we then generate the random number. This will result in a distribution that is a bit more jagged than it should be, but any sort of interpolation would be a distraction at this stage. It is good enough, because pseudonymization can lack finesse yet still be fit for its intended purpose.
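The bucket-based approach described above can be sketched as follows. This is a minimal illustration in Python rather than the article's actual T-SQL; the function names, the equal-width bucketing, and the bucket count are my assumptions, not taken from the original code.

```python
import random

def build_histogram(values, n_buckets=20):
    """Divide the observed range into equal-width buckets and count
    how many of the existing values fall into each one."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_buckets
    counts = [0] * n_buckets
    for v in values:
        # Clamp the maximum value into the last bucket.
        i = min(int((v - lo) / width), n_buckets - 1)
        counts[i] += 1
    return lo, width, counts

def sample_from_histogram(lo, width, counts):
    """Pick a bucket with probability proportional to its count,
    then draw a uniform random value within that bucket."""
    bucket = random.choices(range(len(counts)), weights=counts)[0]
    return lo + (bucket + random.random()) * width
```

Because each generated value is uniform within its bucket, the spoofed distribution is a step-wise approximation of the real one, which is where the slight jaggedness mentioned above comes from.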

We aim to produce a simple function that returns a random float, which we can use to stock the ModifiedDate column of AdventureWorks2016.Person.Person (which we have, of course, copied for the purpose).

@dh is our distribution Histogram. The rest is mere detail!


Here are the first few rows with selected columns from the original table

…and the same rows, now pseudonymized

We can now compare the distribution of the old and new data, using the routine I published here in 'Visual Checks on How Data is Distributed in SQL Server'.


The difference is due to the changed size and ranges of the buckets. It is good enough.

We now have a way of randomising independent data, but what if we need to retain the relationship between, for example, the date of an invoice and the invoice amount? We'd need this if, say, we wanted to use the data for practice and training in BI. You'll have seen that there was a relationship between the modified date and the BusinessEntityID. The easiest way of preserving this is to use the original value as a starting point.


This gives a modification date closer to the original, possibly at the cost of some loss in the degree of obfuscation.
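The idea of starting from the original value amounts to adding a bounded random displacement, so that each spoofed value stays within a fixed distance of the real one. Here is a minimal Python sketch of that jittering step (the name `blur` and the symmetric window are my assumptions; the article's own implementation is in T-SQL):

```python
import random

def blur(value, spread):
    """Displace the original value by a uniform random amount in
    [-spread, +spread], preserving its broad relationship with other
    columns at some cost in obfuscation."""
    return value + (random.random() * 2.0 - 1.0) * spread
```

The choice of `spread` is the trade-off: a small spread keeps correlations with other columns intact, while a large one obfuscates more thoroughly.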

So we are now making progress. A way of generating continuous variables as floats, either entirely randomly or merely blurred, is going to be very useful, because we can coerce a lot of datatypes into floats and back again. What about strings made up of several words? Perhaps it is time to show a way of doing this for a database.
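To illustrate the point about coercing datatypes into floats and back, here is one way to round-trip dates (the technique we used for ModifiedDate): convert each date to a count of days since some reference date, blur or resample that float, and convert it back. This is a hedged Python sketch; the reference date and function names are arbitrary choices of mine, not the article's.

```python
from datetime import datetime, timedelta

# Arbitrary reference date for the conversion (an assumption).
EPOCH = datetime(2000, 1, 1)

def date_to_float(d):
    """Days since the reference date, as a float.
    The fractional part carries the time-of-day."""
    return (d - EPOCH).total_seconds() / 86400.0

def float_to_date(f):
    """Inverse of date_to_float."""
    return EPOCH + timedelta(days=f)
```

Any float-producing generator, whether the histogram sampler or the blur, can then be applied between the two conversions, which is what makes a single float-based routine reusable across so many column types.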