Five real .NET memory leaks from production — the symptoms, debugging steps, and fixes that saved our systems.

One day our .NET application started dying every sixty seconds.
Container restart. Container restart. Container restart.

Like clockwork.

At first, we thought it was a deployment issue. Then we saw the real culprit: memory consumption spiking to 4GB until the container hit its limit and was killed.

What followed was chaos: background jobs died mid-execution, message queues overflowed, API calls returned 500s, and Slack exploded with false alerts. All of it traced back to one silent assassin — a memory leak strangling our application.

Memory leaks don’t announce themselves. They lurk, growing with every request, until they bring down everything you’ve built.

Over five years of running .NET in production, I’ve hunted down more of these invisible killers than I care to remember. Here are five real cases — how they appeared in the wild, how we debugged them, and how we fixed them.

Case 1: The HttpClient Leak

⚠️ Symptoms: Memory climbed steadily from 150MB to 3GB over 48 hours. External API calls timed out, socket exhaustion errors flooded our logs. Gen 2 heap size kept growing despite normal GC collections.

🔍 Investigation: Using dotnet-counters, we monitored GC metrics in real-time:

dotnet-counters monitor --process-id 1234 --counters System.Runtime

The telltale sign: Gen 0/1 collections were fine, but Gen 2 heap size grew — classic “not disposed” issue.

💀 Root Cause: Our integration service created HttpClient instances everywhere:

// The problematic code
public class PaymentService
{
    public async Task<PaymentResponse> ProcessPayment(PaymentRequest request)
    {
        var client = new HttpClient(); // 🔥 Memory leak bomb
        var response = await client.PostAsJsonAsync("https://api.payments.com/process", request);
        return await response.Content.ReadFromJsonAsync<PaymentResponse>();
        // HttpClient never disposed, sockets never released
    }
}

Each HttpClient instance owns its own handler and connection pool. Creating one per request leaks those handlers, and even a disposed client leaves its sockets lingering in TIME_WAIT — either way, resources accumulate until your app crashes.

🛠 Fix: We implemented IHttpClientFactory and proper disposal patterns:

// The corrected approach
public class PaymentService
{
    private readonly HttpClient _httpClient;

    public PaymentService(IHttpClientFactory httpClientFactory)
    {
        _httpClient = httpClientFactory.CreateClient("PaymentAPI");
    }

    public async Task<PaymentResponse> ProcessPayment(PaymentRequest request)
    {
        var response = await _httpClient.PostAsJsonAsync("process", request);
        return await response.Content.ReadFromJsonAsync<PaymentResponse>();
    }
}

✅ Result: Memory stabilized at 180MB, socket exhaustion disappeared entirely.
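For completeness, the named client needs a registration at startup. A minimal sketch of what that might look like in Program.cs — the base address and timeout here are illustrative assumptions, but the "PaymentAPI" name must match the CreateClient call:

```csharp
// Program.cs — register the named client used by PaymentService.
// The factory pools and recycles the underlying message handlers
// (default handler lifetime: two minutes), which is what actually
// prevents socket exhaustion.
builder.Services.AddHttpClient("PaymentAPI", client =>
{
    client.BaseAddress = new Uri("https://api.payments.com/"); // assumption
    client.Timeout = TimeSpan.FromSeconds(30);                 // assumption
});
builder.Services.AddScoped<PaymentService>();
```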

Case 2: The Runaway Cache

⚠️ Symptoms: Memory grew from 200MB to 8GB over weeks — about 50MB per day under normal load. Cache hit rates were excellent, but something was clearly wrong.

🔍 Investigation: Heap analysis revealed our IMemoryCache contained over 2.3 million cached objects, many dating back weeks. Cache entries had accumulated without any eviction.

💀 Root Cause: We were using IMemoryCache without expiration policies:

// The memory-consuming monster
public class UserProfileService
{
    private readonly IMemoryCache _cache;

    public UserProfileService(IMemoryCache cache)
    {
        _cache = cache;
    }

    public async Task<UserProfile> GetUserProfile(int userId)
    {
        var cacheKey = $"user-profile-{userId}";

        if (_cache.TryGetValue(cacheKey, out UserProfile cachedProfile))
            return cachedProfile;

        var profile = await LoadUserProfileFromDatabase(userId);

        // 🔥 No expiration = infinite memory growth
        _cache.Set(cacheKey, profile);

        return profile;
    }
}

Every user profile was cached forever. With 50,000+ active users, we were building an ever-growing in-memory database.

🛠 Fix: Implemented sliding and absolute expiration policies:

// The memory-conscious approach
public async Task<UserProfile> GetUserProfile(int userId)
{
    var cacheKey = $"user-profile-{userId}";

    if (_cache.TryGetValue(cacheKey, out UserProfile cachedProfile))
        return cachedProfile;

    var profile = await LoadUserProfileFromDatabase(userId);

    // ✅ Reasonable expiration policies
    var cacheOptions = new MemoryCacheEntryOptions
    {
        AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(4), // Max 4 hours
        SlidingExpiration = TimeSpan.FromMinutes(30) // Extend if accessed
    };

    _cache.Set(cacheKey, profile, cacheOptions);

    return profile;
}

✅ Result: Cache memory stabilized at 400MB with automatic eviction of stale entries.
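Expiration alone still lets a traffic burst inflate the cache before entries age out. If you want a hard ceiling, IMemoryCache also supports a size limit. A sketch, assuming each profile counts as one unit (the 10,000 cap is an illustrative number, not ours):

```csharp
// Registration: cap the cache at 10,000 units (you define what a unit means)
services.AddMemoryCache(options => options.SizeLimit = 10_000);

// Once SizeLimit is set, every entry must declare a Size, or Set() throws
var cacheOptions = new MemoryCacheEntryOptions
{
    Size = 1, // one profile = one unit
    AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(4),
    SlidingExpiration = TimeSpan.FromMinutes(30)
};
_cache.Set(cacheKey, profile, cacheOptions);
```

When the limit is reached, new entries are simply not added until compaction evicts old ones, so pair a size limit with sensible expirations rather than relying on it alone.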

Case 3: The File Stream Nightmare

⚠️ Symptoms: Memory spiked from 300MB to 1.8GB during 50MB file uploads. Memory never returned to baseline, even after uploads completed and forced garbage collection.

🔍 Investigation: Process memory dumps showed thousands of byte[] arrays that weren’t being released. Retention analysis pointed to FileStream and MemoryStream objects in our file processing pipeline.

💀 Root Cause: Our file upload handler wasn’t properly disposing streams, especially in error scenarios:

// The resource leak nightmare
public async Task<string> ProcessFileUpload(IFormFile file)
{
    var memoryStream = new MemoryStream(); // 🔥 Never disposed
    await file.CopyToAsync(memoryStream);

    var fileBytes = memoryStream.ToArray(); // Large byte array stuck in memory

    // Process file...
    if (someCondition)
    {
        throw new InvalidOperationException(); // 🔥 Exception path = leak
    }

    var fileStream = File.Create($"uploads/{file.FileName}"); // 🔥 Also never disposed
    await fileStream.WriteAsync(fileBytes);

    return "Success";
}

Two problems: Streams weren’t disposed, and exception paths bypassed cleanup code.

🛠 Fix: Proper async disposal with try-finally safety:

// The bulletproof approach
public async Task<string> ProcessFileUpload(IFormFile file)
{
    await using var memoryStream = new MemoryStream(); // ✅ Auto-disposal
    await file.CopyToAsync(memoryStream);

    var fileBytes = memoryStream.ToArray();

    // Process file...
    if (someCondition)
    {
        throw new InvalidOperationException(); // ✅ Stream still gets disposed
    }

    var filePath = $"uploads/{file.FileName}";
    await using var fileStream = File.Create(filePath); // ✅ Safe disposal
    await fileStream.WriteAsync(fileBytes);

    return "Success";
}

✅ Result: File upload memory spikes became temporary, memory returned to baseline after each upload.
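There's a further optimization worth knowing: CopyToAsync can stream the upload straight to disk, so the file never has to fit in a managed byte[] at all and memory use stays flat regardless of file size. A sketch, keeping the same illustrative path:

```csharp
// Stream the upload directly to disk: no intermediate
// MemoryStream, no ToArray(), no large-object-heap allocations.
public async Task<string> ProcessFileUpload(IFormFile file)
{
    // Note: in real code, sanitize file.FileName before using it in a path
    var filePath = $"uploads/{file.FileName}";
    await using var fileStream = File.Create(filePath);
    await file.CopyToAsync(fileStream);
    return "Success";
}
```

This only works when your processing can run against the stream or the saved file; if you genuinely need the bytes in memory, the buffered version above is the right shape.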

Case 4: The EF Boundary Breach

⚠️ Symptoms: API responses were fast initially, but memory spiked to 2.5GB during large data queries. Even small queries consumed excessive memory after running large ones.

🔍 Investigation: Heap dumps revealed massive object graphs rooted in DbContext instances that should have been disposed. Entity Framework change tracking was keeping entire object hierarchies in memory.

💀 Root Cause: We were returning EF entities directly from API controllers:

// The dangerous anti-pattern
[ApiController]
public class OrdersController : ControllerBase
{
    private readonly OrderDbContext _context;

    public OrdersController(OrderDbContext context)
    {
        _context = context;
    }

    [HttpGet]
    public async Task<List<Order>> GetOrders()
    {
        // 🔥 Returning tracked entities keeps DbContext + all related data in memory
        return await _context.Orders
            .Include(o => o.Items)
            .Include(o => o.Customer)
            .ToListAsync();
    }
}

EF’s change tracking kept references to all entities and their navigation properties. Even after the request completed, the JSON serializer and change tracker held onto massive object graphs.

🛠 Fix: Clean DTO mapping to break the Entity Framework boundary:

// The safe, memory-efficient approach
[HttpGet]
public async Task<List<OrderDto>> GetOrders()
{
    var orders = await _context.Orders
        .Select(o => new OrderDto // ✅ Project to DTO immediately
        {
            Id = o.Id,
            CustomerName = o.Customer.Name,
            Total = o.Items.Sum(i => i.Price),
            ItemCount = o.Items.Count
        }) // No Include needed — the projection pulls exactly these columns
        .ToListAsync();

    return orders; // Clean objects, no EF tracking baggage
}

✅ Result: Large query memory usage dropped from 2.5GB to 400MB, subsequent queries performed normally.
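Projections like the Select above are never tracked, so they're safe by construction. When a read-only path genuinely needs full entities, AsNoTracking gets you most of the same benefit:

```csharp
// Read-only query: EF materializes the entities but never registers
// them with the change tracker, so the whole graph becomes collectible
// as soon as serialization finishes.
var orders = await _context.Orders
    .AsNoTracking()
    .Include(o => o.Items)
    .ToListAsync();
```

We still prefer the DTO projection for API responses, since it also stops lazy-loading surprises and over-fetching at the serializer.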

Case 5: The Event Handler Prison

⚠️ Symptoms: Memory consumption grew linearly with request volume. After 10,000 requests, our API used 1.5GB instead of the expected 300MB. Restarting fixed it temporarily.

🔍 Investigation: Using dotMemory profiler, we analyzed heap snapshots and discovered thousands of retained objects that should have been garbage collected. Retention path analysis revealed the culprit: static event handlers.

💀 Root Cause: Our audit logging system subscribed to events but never unsubscribed:

// The memory leak trap
public class AuditLogger : IDisposable
{
    public AuditLogger()
    {
        // Static event - creates strong reference to this instance
        UserActionEvents.ActionPerformed += HandleUserAction; // 🔥 Memory leak
    }

    private void HandleUserAction(object sender, UserActionEventArgs e)
    {
        // Log the action
    }

    public void Dispose()
    {
        // We forgot to unsubscribe! Objects stay in memory forever
    }
}

Every request created a new AuditLogger instance, but the static event held strong references to all of them. The garbage collector couldn’t clean up any instance because they were all reachable through the event subscription.

🛠 Fix: Proper event unsubscription in the disposal pattern:

// The corrected approach
public class AuditLogger : IDisposable
{
    public AuditLogger()
    {
        UserActionEvents.ActionPerformed += HandleUserAction;
    }

    private void HandleUserAction(object sender, UserActionEventArgs e)
    {
        // Log the action
    }

    public void Dispose()
    {
        UserActionEvents.ActionPerformed -= HandleUserAction; // ✅ Clean unsubscription
    }
}

✅ Result: Memory usage dropped to normal levels, heap snapshots showed proper cleanup of transient objects.
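Unsubscribing in Dispose only helps if Dispose actually runs. We let the DI container guarantee that by registering the logger as scoped: ASP.NET Core disposes scoped IDisposables automatically when each request's scope ends. A minimal sketch:

```csharp
// Program.cs — the container calls Dispose() (and therefore the
// event unsubscription) when the request scope is torn down.
builder.Services.AddScoped<AuditLogger>();
```

If you construct such an object manually instead, wrap it in a using block so the unsubscribe can't be skipped.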

Memory Leak Debugging Toolbox

When hunting memory leaks in production .NET apps, these tools have saved me countless hours:

Command-Line Monitoring

# Real-time GC and memory metrics
dotnet-counters monitor --process-id <pid> --counters System.Runtime
# Watch for growing Gen 2 heap and failed collections
# Healthy apps show regular Gen 0/1 collections with stable Gen 2
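When the counters confirm a leak, a heap snapshot tells you which types are responsible. The companion tools produce the dumps that profilers analyze — commands below are illustrative, swap in your own PID:

```shell
# Capture a lightweight GC heap snapshot (openable in Visual Studio or PerfView)
dotnet-gcdump collect --process-id <pid>

# Or capture a full process dump and inspect it from the CLI
dotnet-dump collect --process-id <pid>
dotnet-dump analyze <dump-file>
# inside the analyzer, 'dumpheap -stat' lists object counts by type
```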

Memory Profilers

  • dotMemory (JetBrains) — Excellent for retention path analysis
  • PerfView (Microsoft, free) — Deep heap dump analysis
  • Application Insights — Continuous monitoring with alerting

Key Metrics

  • Gen 2 heap size — Should remain relatively stable
  • GC collection frequency — Gen 0/1 should be frequent, Gen 2 rare
  • Working set growth — Steady increases indicate leaks
  • Handle counts — File/socket handles should not accumulate

Prevention Checklist

✅ Dispose all resources (using / await using)
✅ Never return EF entities — map to DTOs
✅ Set cache expirations — nothing should live forever
✅ Unsubscribe from events (esp. static/long-lived)
✅ Use IHttpClientFactory — never new up HttpClient manually
✅ Monitor memory in production — alert on growth
✅ Load test file handling — simulate real stress
✅ Review disposal patterns in code reviews

Conclusion

Memory leaks in .NET aren’t rare bugs — they’re predictable traps. The difference between firefighting and prevention is disciplined coding plus monitoring.

Take 30 minutes this week to:

  • Review disposables
  • Check cache expiration policies
  • Audit HttpClient usage
  • Verify DTO boundaries

That half-hour might save you from a lost weekend fighting production fires.

Have you hit a nasty .NET memory leak in production? Drop your story in the comments — let’s build a leak survival guide together.
