Figuring out uniqueness in large datasets is somewhat trivial in SQL via the DISTINCT keyword. This technique, however, puts enough load on the SQL box that it can be more beneficial to scale out horizontally in managed code instead. Typically, it is much easier to spin up more web boxes to handle traffic than it is to stand up a beefier database. Moving the work into managed code lets you scale out to increase throughput and reduces SQL pressure.
In this take, I will show you some techniques for working with duplicate data in managed code. I will explore common gotchas and show you what to do about them.
The code will be written in C# on .NET 6, so be sure to install a copy on your local machine. If preferred, LINQPad can be used; you will be able to copy, paste, and execute the code in this tool. Alternatively, feel free to clone the sample code from GitHub.
To begin, be sure to bring in the following using statements. I will use the stopwatch and write to the console often, so it is good to have these available.
```csharp
using System.Diagnostics;
using static System.Console;
```
I will use a timer to take performance snapshots for each technique. The goal here isn’t to be precise but to get a general feel for what is happening under the covers. I am running this code using a Debug build on an Intel 11th gen i7 1.4 GHz with 8 cores and 32GB of RAM. Your exact results may vary on your machine.
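The snippets below share a single stopwatch instance. The original listing never shows it being created, so assume a one-line declaration like this near the top:

```csharp
var stopWatch = new Stopwatch(); // reused by every timing snippet below
```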
A simple example
A typical use case, and one you may already have experience with, is figuring out distinct numbers in a list of arbitrary numbers. C# makes this relatively painless via Distinct.
```csharp
WriteLine("Simple Distinct");

new[] {1, 1, 2, 3, 4, 5}.Distinct()
    .ToList()
    .ForEach(n => Write(n + " ")); // 1 2 3 4 5

WriteLine();
WriteLine();
```
Because value types have built-in value equality, the out-of-the-box functionality works effortlessly. The underlying algorithm can distinguish between different numbers and find unique values in the set. However, a real issue starts to emerge with reference types.
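To make the contrast concrete, here is a minimal sketch of the difference:

```csharp
// Value types compare by value out of the box
WriteLine(1.Equals(1)); // True

// Plain reference types fall back to reference equality,
// so two separate instances are never equal
WriteLine(new object().Equals(new object())); // False
```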
Bare distinct
In the real world, complex data types dominate business applications. Say I have a bunch of lab rats, three million to be exact, and I need to figure out a distinct set without any duplicates.
In object-oriented fashion, a lab rat might look like this:
```csharp
public class LabRatA
{
    public string Name { get; set; } = string.Empty;
    public int TrackingId { get; set; }
    public Color Color { get; set; }
}

public enum Color { Black, White, Brown };
```
This lab rat has a name, tracking id, and color. This is an anaemic type because it is just a collection of properties without any encapsulated logic.
To generate three million rats, an extension method like Times comes in handy:
```csharp
const int uniqueEntries = 300;

var ratsA = 3000000.Times(x => new LabRatA
{
    Name = "LabRat_" + x % uniqueEntries,
    TrackingId = x % uniqueEntries,
    Color = (Color)(x % uniqueEntries)
}).ToList();

public static class EnumerableExtensions
{
    public static IEnumerable<T> Times<T>(
        this int count, Func<int, T> func)
    {
        for (var i = 1; i <= count; i++)
            yield return func(i);
    }
}
```
A modulus with uniqueEntries generates duplicate data every three hundred entries. This makes the set 300/3,000,000 unique, or about 0.01 percent, which is a somewhat realistic scenario when you have a ton of duplicate records.
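A quick sanity check of the modulus trick, in case the wraparound isn't obvious:

```csharp
// Entries 300 apart collapse to the same remainder,
// so rats 1, 301, 601, ... share identical property values
WriteLine(1 % 300);   // 1
WriteLine(301 % 300); // 1
WriteLine(601 % 300); // 1
```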
The extension method makes this a bit more fluent and easier to express in code. This method extends an integer like 3000000 and lets the code dot into a lambda expression that generates lab rats.
One way to figure out distinct entries looks like this:
```csharp
WriteLine("Bare Distinct");
stopWatch.Start();

WriteLine(ratsA.Distinct().Count()); // 3000000

stopWatch.Stop();
WriteLine($"{stopWatch.ElapsedMilliseconds} ms"); // 451 ms
WriteLine();
```
This, of course, does not work as intended and lacks good performance. The algorithm has no choice but to check each instance, and every lab rat occupies a unique reference in memory, so every entry is considered unique. There is quite a bit of churn here.
To remedy this issue, the code needs a way to do equality checks on complex types. Remember that without knowing the equality semantics between lab rats, the algorithm is left with no choice but to do reference checks, which is undesirable. Every complex type has a default comparer, and what is shown above is the bare behavior without explicitly defining equality.
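To see that default comparer in action, here is a small sketch using the LabRatA type from above:

```csharp
// EqualityComparer<T>.Default falls back to Object.Equals,
// which is reference equality for a class like LabRatA
var r1 = new LabRatA { Name = "LabRat_1", TrackingId = 1, Color = Color.White };
var r2 = new LabRatA { Name = "LabRat_1", TrackingId = 1, Color = Color.White };

WriteLine(EqualityComparer<LabRatA>.Default.Equals(r1, r2)); // False
```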
Naïve distinct
One way to establish equality is via the EqualityComparer<T> base class. This requires overriding two methods: Equals and GetHashCode. Figuring out equality is trivial, but computing a good hash code takes more thought, and it is tempting to treat it as optional.
```csharp
public class LabRatANaiveComparer : EqualityComparer<LabRatA>
{
    public override bool Equals(LabRatA? x, LabRatA? y) =>
        x?.Name == y?.Name
        && x?.TrackingId == y?.TrackingId
        && x?.Color == y?.Color;

    public override int GetHashCode(LabRatA obj) => 1;
}
```
This implementation skips computing the hash code by forcing a constant. Without knowing what the hash is actually used for, this feels like a harmless shortcut.
To test what this does:
```csharp
WriteLine("Naive Distinct");
stopWatch.Restart();

WriteLine(ratsA.Distinct(
        new LabRatANaiveComparer())
    .Count()); // 300

stopWatch.Stop();
WriteLine($"{stopWatch.ElapsedMilliseconds} ms"); // 4397 ms
WriteLine();
```
This returns the correct number of distinct lab rats, but the performance is a dismal 4.4 seconds, roughly ten times slower than the simple but incorrect implementation. That puts this code in a precarious situation: correctness is often preferable to performance, though sometimes the tradeoff cuts the other way. Ideally, you want the code to give you both correctness and good performance.
Proper distinct
There is a way to compute a good hash for multiple properties: pack them into a tuple and use its hash code. Say there is a name, tracking id, and color for a lab rat; a tuple with these three properties yields a reasonably unique hash code.
```csharp
public class LabRatAProperComparer : EqualityComparer<LabRatA>
{
    public override bool Equals(LabRatA? x, LabRatA? y) =>
        x?.Name == y?.Name
        && x?.TrackingId == y?.TrackingId
        && x?.Color == y?.Color;

    public override int GetHashCode(LabRatA obj) =>
        (obj.Name, obj.TrackingId, obj.Color) // tuple
            .GetHashCode();
}
```
The Equals method remains intact, and this uses the hash from the tuple instead of a dumb constant. With a properly computed hash code, check to see what this does:
```csharp
WriteLine("Proper Distinct");
stopWatch.Restart();

WriteLine(ratsA.Distinct(
        new LabRatAProperComparer())
    .Count()); // 300

stopWatch.Stop();
WriteLine($"{stopWatch.ElapsedMilliseconds} ms"); // 252 ms
WriteLine();
```
This time the performance is even better than the bare distinct, about twice as fast, and this returns a correct result. The reasons for the performance gains are twofold: this no longer churns through millions of records, and there is something very interesting happening with this hash code.
Why is this happening?
To find out what is happening, I will need to F12 into the Distinct method and decompile the code. This is possible in LINQPad and in any modern IDE. The focus is .NET 6, so keep in mind that implementation details are likely to change in future releases.
This is roughly what you might see; it is not the entire code but only a small chunk of it:
```csharp
public class DistinctIterator
{
    private readonly LabRatA?[] _entries; // hash set
    private readonly IEqualityComparer<LabRatA> _comparer;

    public DistinctIterator()
    {
        _entries = new LabRatA[300]; // max 300 rats
        _comparer = new LabRatAProperComparer();
    }

    public IEnumerable<LabRatA> Distinct(List<LabRatA> rats) =>
        rats.Where(AddIfNotPresent); // loops

    private bool AddIfNotPresent(LabRatA rat)
    {
        var hashCode = _comparer.GetHashCode(rat);
        var bucket = GetBucketRef(hashCode);
        var i = (int)bucket;

        while (i >= 0)
        {
            var entry = _entries[i];

            if (entry != null
                && _comparer.GetHashCode(entry) == hashCode
                && _comparer.Equals(entry, rat))
            {
                return false;
            }

            i = -1; // churn
        }

        if (_entries[bucket] != null) return false; // collision

        _entries[bucket] = rat;
        return true;
    }

    private uint GetBucketRef(int hashCode) =>
        (uint)hashCode % (uint)_entries.Length;
}
```
These are the findings:

- The DistinctIterator loops through the entire list at least once
- The algorithm builds an internal hash set to nuke duplicate entries
- The hash code optimizes the nested while loop via buckets
- When the hash code causes collisions, the algorithm churns inside the nested loop
Please note that this code is far from complete and only focuses on the hash code. The nested while loop here does not actually churn but escapes the loop immediately, which means it cannot find all distinct values. This is mostly for the sake of brevity, to avoid smacking you with a wall of code.
What is most interesting is that this explains the poor performance seen in the naïve comparer. When the hash code collides for all entries, the algorithm must work much harder, which spikes the complexity to quadratic, or O(n^2). If the hash theoretically causes no collisions, expect linear, or O(n), complexity.
In performance-sensitive code, it may make sense to do away with the built-in hash code entirely and switch to one that causes even fewer collisions, like murmur hash, for example.
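MurmurHash is not part of the .NET base library, so as a sketch of the idea, here is a comparer with a hand-rolled FNV-1a hash standing in for it; the comparer name and hash choice are mine, not from the sample code:

```csharp
public class LabRatACustomHashComparer : EqualityComparer<LabRatA>
{
    public override bool Equals(LabRatA? x, LabRatA? y) =>
        x?.Name == y?.Name
        && x?.TrackingId == y?.TrackingId
        && x?.Color == y?.Color;

    public override int GetHashCode(LabRatA obj)
    {
        // FNV-1a over the name characters, then mix in the other fields;
        // deterministic across runs, unlike string.GetHashCode
        const uint offsetBasis = 2166136261;
        const uint prime = 16777619;

        var hash = offsetBasis;
        foreach (var c in obj.Name)
        {
            hash ^= c;
            hash *= prime;
        }

        hash ^= (uint)obj.TrackingId;
        hash *= prime;
        hash ^= (uint)obj.Color;
        hash *= prime;

        return (int)hash;
    }
}
```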
Even though the DistinctIterator implementation is far from complete, it is still possible to run the code:
```csharp
WriteLine("Simple DistinctIterator");
stopWatch.Restart();

WriteLine(new DistinctIterator()
    .Distinct(ratsA)
    .Count()); // 194

stopWatch.Stop();
WriteLine($"{stopWatch.ElapsedMilliseconds} ms"); // 284 ms
WriteLine();
```
The count does not find all distinct lab rats, and it changes on every execution. This is because the built-in string hash code is randomized per process in .NET, which means the hash isn't deterministic across runs.
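A one-liner makes this easy to verify; run the program twice and compare:

```csharp
// String hash codes are randomized per process in modern .NET,
// so this prints a different value on every run
WriteLine("LabRat_1".GetHashCode());
```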
Although the implementation of Distinct might be updated in the future, it is unlikely that this external hash code contract will change because it is a critical part of determining equality and optimizing internal algorithms in .NET.
A bit of asynchrony
Say there are two lists that need to be turned into a unique set, and they are coming from asynchronous data sources. Unfortunately, the framework has nothing built-in to deal with this, but the following extension method can come in handy.
```csharp
public static async Task<IEnumerable<TR>> SelectManyAsync<T, TR>(
    this IEnumerable<T> enumeration,
    Func<T, Task<List<TR>>> func) =>
    (await Task.WhenAll(enumeration.Select(func))
        .ConfigureAwait(false))
    .SelectMany(s => s);
```
Be sure to put this inside the existing EnumerableExtensions static class.
This takes a parameter list and feeds each item to the lambda function, runs the resulting tasks concurrently via Task.WhenAll, and returns a combined list that still contains all the duplicate records.
To get a unique set from the combined async list:
```csharp
WriteLine("Async Distinct");
stopWatch.Restart();

WriteLine((await new[] {"A", "B"} // parameter list
    .SelectManyAsync(GetLabRats))
    .Distinct(new LabRatAProperComparer())
    .Count()); // 300

stopWatch.Stop();
WriteLine($"{stopWatch.ElapsedMilliseconds} ms"); // 516 ms
WriteLine();

Task<List<LabRatA>> GetLabRats(string type) =>
    type switch
    {
        "A" => Task.FromResult(ratsA),
        "B" => Task.FromResult(ratsA),
        _ => throw new ArgumentException("Invalid rat type")
    };
```
This technique remains performant because the elapsed time grows linearly with the number of records, which is now six million, double the previous count. Complexity stays linear because the parameter list remains constant. If the parameter list itself becomes unbounded, each entry with millions of records, the total work grows with the product of the two and the code trends toward quadratic complexity.
Distinct with C# records
If you are already on modern .NET, C# 9 introduces records to the mix; these provide built-in functionality for encapsulating data. One of their core features is value equality: two records are equal if they have the same type and store the same values.
To play with records, fire up another lab rat:
```csharp
public record LabRatB(string Name, int TrackingId, Color Color);
```
This has the same properties as before but expressed in less code. Because equality is built-in, it is possible to figure out distinct lab rats without implementing a comparer.
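Before running Distinct, a quick sketch shows the built-in value equality at work:

```csharp
// Two separate record instances with the same values are equal
var b1 = new LabRatB("LabRat_1", 1, Color.White);
var b2 = new LabRatB("LabRat_1", 1, Color.White);

WriteLine(b1 == b2);                // True: same type, same values
WriteLine(ReferenceEquals(b1, b2)); // False: still two instances
```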
```csharp
var ratsB = 3000000.Times(x => new LabRatB(
    "LabRat_" + x % uniqueEntries,
    x % uniqueEntries,
    (Color)(x % uniqueEntries)
)).ToList();

WriteLine("Record Distinct");
stopWatch.Restart(); // Restart, not Start, to avoid accumulating prior time

WriteLine(ratsB.Distinct().Count()); // 300

stopWatch.Stop();
WriteLine($"{stopWatch.ElapsedMilliseconds} ms"); // 948 ms
WriteLine();
```
Notice the performance gets dinged a bit. This is because the built-in implementation does not use the tuple technique, which causes more collisions.
This code is equivalent to the following but using a class instead of a record:
```csharp
public class LabRatC : IEquatable<LabRatC>
{
    protected virtual Type EqualityContract =>
        typeof(LabRatC);

    public string Name { get; init; } = string.Empty;
    public int TrackingId { get; init; }
    public Color Color { get; init; }

    public override int GetHashCode() =>
        HashCode.Combine(
            EqualityComparer<Type>.Default
                .GetHashCode(EqualityContract),
            Name.GetHashCode(),
            TrackingId.GetHashCode(),
            Color.GetHashCode());

    public bool Equals(LabRatC? other) =>
        Name == other?.Name
        && TrackingId == other.TrackingId
        && Color == other.Color;
}
```
The biggest difference here is the HashCode.Combine helper, which is what a C# record uses internally, and this can be overridden. Notice it is also possible to avoid defining a comparer by implementing IEquatable<T> on the target class. Records implement this equatable interface too to provide their equality functionality.
Now to verify this code works:
```csharp
var ratsC = 3000000.Times(x => new LabRatC
{
    Name = "LabRat_" + x % uniqueEntries,
    TrackingId = x % uniqueEntries,
    Color = (Color)(x % uniqueEntries)
}).ToList();

WriteLine("IEquatable Distinct");
stopWatch.Restart();

WriteLine(ratsC.Distinct().Count()); // 300

stopWatch.Stop();
WriteLine($"{stopWatch.ElapsedMilliseconds} ms"); // 1092 ms
WriteLine();
```
The elapsed time differs here because the built-in hash code calculation is dynamic. This is one tidbit to keep in mind: hash codes are a moving target, so don't expect consistency between different types or across runs.
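As mentioned above, the generated hash of a record can also be overridden. A minimal sketch, reusing the tuple technique from earlier (the LabRatD name is illustrative, not from the sample code):

```csharp
public record LabRatD(string Name, int TrackingId, Color Color)
{
    // Swap the generated HashCode.Combine call for the tuple hash
    // used by LabRatAProperComparer; equality members stay generated
    public override int GetHashCode() =>
        (Name, TrackingId, Color).GetHashCode();
}
```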
Conclusion
Figuring out uniqueness in managed code can be useful in taking the load off the database. The hash code dictates the efficiency of the distinct algorithm, so the best approach is to avoid too many collisions.
If you like this article, you might also like Functional monads in C#