Modern .NET gives you tools to make data handling effortless, if you know where to look.
Introduction
If you’ve ever found yourself knee-deep in synchronization locks, ConcurrentDictionary gymnastics, or inexplicable deadlocks, you’re not alone. Many developers try to fix thread safety with brute force. We lock, we await, we synchronize, and in the process, we slow everything down. The truth is, half the time, we don’t need more concurrency tools. We just need better data structures.
.NET has been quietly evolving its collection APIs for years, introducing structures that can save you from race conditions, redundant enumerations, and unnecessary heap allocations, all without you touching a single lock statement. In this post, we’ll dive into five powerful yet criminally underrated techniques that will make your code safer, faster, and way easier to reason about.
1. Use TryGetNonEnumeratedCount Before Iterating Over Large Sequences
Have you ever written this?
if (items.Count() > 0)
{
    // process items
}
If you’re dealing with a plain List<T>, this is fine. But if items happens to be an IEnumerable<T> coming from LINQ, Count() may have to iterate over the entire sequence just to count the elements, when all you really wanted to know was whether any exist (which is what Any() is for). That’s O(n) time wasted before your actual work begins.
In .NET 6, Microsoft added a quiet little optimization:
if (items.TryGetNonEnumeratedCount(out int count))
{
    Console.WriteLine($"Count is {count}");
}
else
{
    Console.WriteLine("Needs enumeration.");
}
Here’s what’s happening: instead of forcing the sequence to enumerate, TryGetNonEnumeratedCount checks if it can retrieve the count directly (like from an array, list, or collection implementing ICollection<T>). If not, it returns false, and you can still fall back to a full enumeration if needed.
Why it matters: This tiny method saves a ton of unnecessary looping, especially when you’re working with deferred LINQ queries or streaming APIs. It’s the kind of micro-optimization that pays off massively when your data sizes scale up.
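To see the difference concretely, here is a minimal sketch (requires .NET 6+) contrasting a List<T>, where the count is available instantly, with a deferred Where() query, where it isn’t:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var list = new List<int> { 1, 2, 3 };
IEnumerable<int> query = list.Where(x => x > 1); // deferred LINQ query

// A List<T> implements ICollection<T>, so the count is available instantly.
Console.WriteLine(list.TryGetNonEnumeratedCount(out int listCount)); // True
Console.WriteLine(listCount);                                        // 3

// A Where() query has no precomputed count; the method returns false
// without triggering any enumeration.
Console.WriteLine(query.TryGetNonEnumeratedCount(out _));            // False
```

Note that the false branch doesn’t cost you anything: the sequence hasn’t been touched, so you’re free to enumerate it once, on your own terms.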
2. Prefer ImmutableArray<T> for Thread-Safe Read-Heavy Scenarios
If you’re managing configuration data, static lookups, or shared lists that are read often but rarely updated, then ImmutableArray<T> is your best friend.
Consider this naïve setup:
public static List<string> SupportedCultures = new() { "en-US", "fr-FR" };
Now imagine multiple threads reading from it. If you ever modify that list while it’s being read, you risk undefined behavior or runtime exceptions. Locks can fix it, but there’s a cleaner way:
using System.Collections.Immutable;
public static ImmutableArray<string> SupportedCultures =
    ImmutableArray.Create("en-US", "fr-FR");
This array is immutable and therefore thread-safe by design. Updates are performed by creating a new array snapshot; publishing the new reference is a single atomic write, so readers always see a complete, consistent array.
Why it matters: Immutable collections eliminate the need for synchronization primitives in many shared-read scenarios. They make your intent clear: “This won’t change.” That mental model alone saves debugging hours.
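Here is a minimal sketch of the snapshot-update pattern. Every “mutation” returns a new array and leaves the original untouched, which is exactly why concurrent readers can never observe a half-updated state:

```csharp
using System;
using System.Collections.Immutable;

ImmutableArray<string> cultures = ImmutableArray.Create("en-US", "fr-FR");

// "Adding" never mutates the original; it returns a fresh snapshot.
ImmutableArray<string> updated = cultures.Add("de-DE");

Console.WriteLine(cultures.Length); // 2 -- the original is untouched
Console.WriteLine(updated.Length);  // 3
```

If multiple threads can race to publish updates, System.Collections.Immutable also offers ImmutableInterlocked.Update, which retries the transform until the swap succeeds.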
3. Use SortedSet<T> or SortedDictionary<TKey, TValue> When Ordering Matters
How many times have you sorted a list after filling it?
var scores = new List<int> { 50, 20, 70 };
scores.Sort();
That’s fine for small cases, but if you’re inserting items over time and frequently need them in order, a SortedSet or SortedDictionary is much better. These maintain order as you insert.
var leaderboard = new SortedDictionary<int, string>
{
    [90] = "Alice",
    [85] = "Bob",
    [92] = "Eve"
};
foreach (var (score, name) in leaderboard)
    Console.WriteLine($"{name}: {score}");
No extra sorting required, ever.
Why it matters: Using sorted collections eliminates repetitive sort operations and improves code readability. They’re particularly useful for priority systems, leaderboards, and caching layers where ordering is inherent to the logic.
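SortedSet<T> works the same way for plain values: each insert lands in sorted position in O(log n), and enumeration always yields ascending order. A minimal sketch:

```csharp
using System;
using System.Collections.Generic;

var scores = new SortedSet<int> { 50, 20, 70 };
scores.Add(35); // lands in sorted position on insert, O(log n)

// Enumeration is always in ascending order -- no Sort() call needed.
Console.WriteLine(string.Join(", ", scores)); // 20, 35, 50, 70

// Min and Max are available without scanning the whole set.
Console.WriteLine(scores.Min); // 20
Console.WriteLine(scores.Max); // 70
```

The trade-off is that inserts cost O(log n) instead of O(1), so this pays off when reads-in-order outnumber writes.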
4. Use ReadOnlyMemory<T> for Immutable Data Exposure
Let’s say you have a method that exposes a chunk of data from a buffer:
public byte[] GetData() => _internalBuffer;
You’ve just given external code full access to your internal memory, meaning they can modify it and cause subtle bugs.
Instead, use ReadOnlyMemory<T> to safely expose data:
public ReadOnlyMemory<byte> GetData() => _internalBuffer.AsMemory();
Consumers can read it, slice it, or pass it around without mutating the original buffer.
And since ReadOnlyMemory<T> can represent data from arrays, strings, or even pooled buffers, it’s perfect for high-performance APIs that value safety without sacrificing speed.
Why it matters: ReadOnlyMemory<T> is your “I promise I won’t mutate this” handshake. It provides memory safety and predictable performance, and it eliminates extra copying.
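A minimal sketch of what consumers can and cannot do with the exposed memory. Slicing creates a new view over the same underlying buffer, with no bytes copied, while writes are rejected at compile time:

```csharp
using System;

byte[] buffer = { 10, 20, 30, 40, 50 };
ReadOnlyMemory<byte> data = buffer.AsMemory();

// Slicing is O(1): a new view over the same buffer, nothing copied.
ReadOnlyMemory<byte> middle = data.Slice(1, 3);

Console.WriteLine(middle.Length);  // 3
Console.WriteLine(middle.Span[0]); // 20

// middle.Span[0] = 99; // won't compile: ReadOnlySpan<T>'s indexer is get-only
```

The commented-out line is the whole point: the compiler, not a runtime check, enforces the read-only contract.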
5. Use Frozen Dictionary and Frozen Set in .NET 8 for Static Lookups
The new FrozenDictionary and FrozenSet types in .NET 8 are absolute game changers. They’re designed for data that’s initialized once and then never changes, like lookup tables, routing maps, or large static configurations.
Let’s compare:
var normal = new Dictionary<string, int>
{
    ["A"] = 1,
    ["B"] = 2
};
var frozen = normal.ToFrozenDictionary();
The frozen version optimizes itself for read performance during creation. Once frozen, it’s immutable and thread-safe, offering lookup times on par with or faster than Dictionary<TKey, TValue> due to precomputed hashing.
Why it matters: FrozenDictionary isn’t just about immutability; it’s about raw speed. When you have read-heavy workloads that never change (such as enums, settings, or metadata), freezing them once at startup saves CPU cycles and prevents accidental writes.
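Here is a minimal sketch of the intended pattern (requires .NET 8+), with made-up lookup data: pay the freezing cost once at startup, then serve read-only lookups from any thread for the life of the process.

```csharp
using System;
using System.Collections.Frozen;
using System.Collections.Generic;

// Pay the optimization cost once, at startup...
FrozenDictionary<string, int> statusCodes = new Dictionary<string, int>
{
    ["OK"] = 200,
    ["NotFound"] = 404
}.ToFrozenDictionary();

FrozenSet<string> allowedSchemes = new[] { "http", "https" }.ToFrozenSet();

// ...then every lookup afterwards is read-only, thread-safe, and fast.
Console.WriteLine(statusCodes["NotFound"]);          // 404
Console.WriteLine(allowedSchemes.Contains("https")); // True
Console.WriteLine(allowedSchemes.Contains("ftp"));   // False
```

Because freezing is comparatively expensive, it belongs in one-time initialization paths; for data that changes during the app’s lifetime, stick with ImmutableDictionary or ConcurrentDictionary.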
Conclusion
Thread safety doesn’t always mean more synchronization; it often means smarter data structures.
By reaching for ImmutableArray, FrozenDictionary, and their cousins, you’re writing code that’s both performant and easy to reason about. You’ll spend less time chasing deadlocks and more time actually building features.
The next time you see someone wrapping every dictionary in a lock, send them this post. Tell them to freeze their data, not their threads.