Dig Deeper: Understanding Performance
Memory leaks are just performance assassins in disguise. 🗡️ You think .NET’s Garbage Collector has your back—until your app starts chugging like a 90s dial-up modem. 📉 Let’s crack open the black box of stack vs. heap, GC quirks, and sneaky performance killers, so your code stays lean and mean. 🚀

🧠 Memory Management in .NET
Because memory leaks are just performance assassins in disguise.
Let’s talk about memory. Not the kind where you remember what you had for breakfast (or don’t—no judgment), but the kind your code chews through like a toddler with a bag of Skittles.
You’ve probably heard the phrase “.NET has a Garbage Collector (GC), so I don’t have to worry about memory management.”
That’s adorable. 😆 While .NET’s memory management does handle a lot of the grunt work, treating it like a magical black box will eventually bite you. Hard.
Today, we’re cracking open that black box. We’ll talk about the stack vs. heap, GC quirks, hidden performance killers, and how to tune your code for better memory efficiency. We’ll even peek into direct memory allocation—because sometimes, you need to escape the GC’s clutches entirely.
By the end, you’ll know how to write .NET code that doesn’t secretly hoard memory like a junior dev collecting npm dependencies they’ll never use. 😂
🔍 The Stack vs. The Heap: Where Things Live and Your Best Efforts Die
📌 Stack: The Speedy Short-Term Storage
The stack is like a super-organized assistant:
- 📎 It manages memory in a last-in, first-out (LIFO) order.
- ⚡ It’s fast because it just moves a pointer up or down.
- 🏗️ It stores value types (e.g., `int`, `float`, `bool`, structs that don't escape).
📦 Heap: The Messy Warehouse
The heap is more like that spare room you promise to clean up but never do:
- 📦 It’s used for reference types (e.g., objects, arrays, strings).
- 🗑️ The GC cleans it up, but only when it feels like it.
- 🐌 It’s slower because memory isn’t allocated in a neat order.
We also have the Large Object Heap (LOH), where objects of 85,000 bytes (~85KB) or more live. It's collected as part of Gen 2 (more on generations below), which means the LOH doesn't get cleaned up unless the GC really needs to. And since LOH objects aren't moved, frequent allocations of large objects can leave holes in memory (heap fragmentation), leading to inefficient memory usage and, eventually, OutOfMemory errors.
💡 Newer versions of .NET introduced LOH compaction, but it is disabled by default.
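As a quick sanity check (a sketch, assuming you're on .NET Core / .NET 5+), you can confirm that large arrays land on the LOH by asking the GC which generation they belong to — LOH objects report as Gen 2 even when freshly allocated:

```csharp
using System;

class LohDemo
{
    static void Main()
    {
        // Arrays of 85,000 bytes or more go to the Large Object Heap.
        var small = new byte[1_000];
        var large = new byte[100_000];

        Console.WriteLine(GC.GetGeneration(small)); // freshly allocated: generation 0
        Console.WriteLine(GC.GetGeneration(large)); // LOH objects report generation 2
    }
}
```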
🤦‍♂️ Where Developers Mess Up
❌ 1. Assuming Value Types Always Stay on the Stack
You hear "Structs are stack-allocated!" all the time. Yeah, until they’re not.
struct Point { public int X, Y; }
class Container { public Point MyPoint; } // Allocated on the heap!
💡 Since `Point` lives inside a class (`Container`), it ends up on the heap.
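Another way a "stack" struct sneaks onto the heap: closures. When a lambda captures a local, the compiler hoists it into a hidden heap-allocated class (a minimal sketch to illustrate the point; the hidden class itself is compiler-generated and invisible in your code):

```csharp
using System;

struct Point { public int X, Y; }

class ClosureDemo
{
    static Func<int> Capture()
    {
        Point p = new Point { X = 1, Y = 2 }; // looks stack-allocated...
        // ...but capturing it in a lambda hoists it into a
        // compiler-generated class, which lives on the heap.
        return () => p.X + p.Y;
    }

    static void Main() => Console.WriteLine(Capture()()); // 3
}
```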
❌ 2. Overusing Reference Types When Value Types Would Do
Consider this:
class Employee { public string Name; public int Age; }
struct EmployeeStruct { public string Name; public int Age; }
✅ `Employee` is heap-allocated.
⚠️ `EmployeeStruct` is sometimes stack-allocated, but the `string` inside is still on the heap.
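The practical difference shows up in copy semantics — copying a struct copies the data, copying a class copies the reference. A minimal sketch using the types above:

```csharp
using System;

class Employee { public string Name; public int Age; }
struct EmployeeStruct { public string Name; public int Age; }

class CopyDemo
{
    static void Main()
    {
        var a = new Employee { Name = "Ada", Age = 36 };
        var b = a;   // copies the reference only
        b.Age = 99;
        Console.WriteLine(a.Age); // 99 — both point at the same heap object

        var s1 = new EmployeeStruct { Name = "Ada", Age = 36 };
        var s2 = s1; // copies the whole struct
        s2.Age = 99;
        Console.WriteLine(s1.Age); // 36 — s1 is untouched
        // Note: s1.Name and s2.Name still reference the same string on the heap.
    }
}
```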
🗑️ Garbage Collection in .NET: The Hidden Cost
.NET uses a generational garbage collector, which divides objects into:
- 🍼 Gen 0: Newborns (short-lived objects).
- 🧗 Gen 1: The survivors (if it made it past one GC cycle, it lands here).
- 🗿 Gen 2: The hoarders (long-lived objects that rarely get collected).
⚠️ How This Affects Performance
Every time the GC runs, it pauses your app (yes, even in .NET 8).
✅ Small Gen 0 collections? Fast.
❌ Objects surviving into Gen 2? The GC throws a full collection party.
🔥 You don’t want that party. It’s slow.
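You can watch a survivor climb the generational ladder with `GC.GetGeneration` (a sketch; exact promotion behavior can vary with GC mode and workstation/server settings):

```csharp
using System;

class PromotionDemo
{
    static void Main()
    {
        var survivor = new object();
        Console.WriteLine(GC.GetGeneration(survivor)); // 0 — newborn

        GC.Collect(); // survives one collection...
        Console.WriteLine(GC.GetGeneration(survivor)); // typically 1

        GC.Collect(); // ...and another
        Console.WriteLine(GC.GetGeneration(survivor)); // typically 2 — it's a hoarder now
    }
}
```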
🏃‍♂️ Avoid GC Pain by Reducing Unnecessary Allocations
When you call `.Add()` on a full list, .NET doesn't allocate just one extra slot. Instead, it doubles the existing capacity (remember, a `List<T>` is just an array in fancy clothes). So early growth triggers frequent allocations, and as the list grows, allocations happen less often but each one does more work: .NET allocates a new array at double the size, copies the existing elements over, and discards the old array—triggering more GC work.
public void CompletelyManaged()
{
    var list = new List<int>();
    for (int i = 0; i < 1000; i++)
    {
        list.Add(i);
        Console.WriteLine($"List size: {list.Count}, Capacity: {list.Capacity}");
    }
}
The output of this shows how the array size is doubled once it reaches capacity.
List size: 1, Capacity: 4
List size: 2, Capacity: 4
List size: 3, Capacity: 4
List size: 4, Capacity: 4
// Create a new array, copy 4 items over, unallocate old array
List size: 5, Capacity: 8
List size: 6, Capacity: 8
List size: 7, Capacity: 8
List size: 8, Capacity: 8
// Create a new array, copy 8 items over, unallocate old array
List size: 9, Capacity: 16
List size: 10, Capacity: 16
<...>
List size: 255, Capacity: 256
List size: 256, Capacity: 256
// Create a new array, copy 256 items over, unallocate old array
List size: 257, Capacity: 512
List size: 258, Capacity: 512
<...>
🚀 Instead of resizing dynamically, pre-allocate when possible:
var list = new List<int>(1000); // Pre-allocate to avoid resizes.
// Alternatively, set the capacity as soon as we know it
list.Capacity = someObject.Items.Count();
📌 If we set the list capacity upfront, we avoid all of these allocations and cleanup.
Benchmark Results:
| Method | Mean | Error | StdDev | Median |
|---|---|---|---|---|
| DynamicGrowth | 1,299.5 ms | 175.68 ms | 512.47 ms | 1,022.3 ms |
| FixedSize | 124.4 ms | 2.49 ms | 5.19 ms | 124.6 ms |
🤯 Pre-allocating the list is over 10× faster!
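If you want to reproduce numbers like these yourself, a minimal BenchmarkDotNet harness looks roughly like this (a sketch — the element count of 10 million is an assumption, not necessarily the figure behind the table above):

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Collections.Generic;

public class ListGrowthBenchmark
{
    private const int N = 10_000_000; // assumed size for illustration

    [Benchmark]
    public List<int> DynamicGrowth()
    {
        var list = new List<int>(); // starts small and doubles repeatedly
        for (int i = 0; i < N; i++) list.Add(i);
        return list;
    }

    [Benchmark]
    public List<int> FixedSize()
    {
        var list = new List<int>(N); // one allocation up front, zero resizes
        for (int i = 0; i < N; i++) list.Add(i);
        return list;
    }
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<ListGrowthBenchmark>();
}
```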
❌ Avoid Holding Onto Objects Too Long
The longer an object lives, the more likely it is to get a promotion (just like bad management in corporate jobs).
💀 Memory Leak Example
static List<byte[]> _cache = new List<byte[]>();

void LoadData()
{
    _cache.Add(new byte[1024 * 1024]); // 1MB object never removed
}
That `_cache` list? It keeps growing, gets promoted to Gen 2, and bloats the heap.
✅ Fix:
_cache = null; // Drops the reference so the GC can reclaim the list and its contents
📌 Large objects (over 85KB) live on the LOH, which doesn’t get compacted, causing fragmentation. You really need to take extra care with objects that live here, or your memory usage will skyrocket faster than the bill for that AWS instance you forgot about.
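If LOH fragmentation does bite you, you can request a one-off compaction before a full collection via the `GCSettings` API (available since .NET Framework 4.5.1; a sketch of the opt-in pattern):

```csharp
using System;
using System.Runtime;

class LohCompactionDemo
{
    static void Main()
    {
        // Ask the GC to compact the Large Object Heap on the next
        // full blocking (Gen 2) collection, then trigger one.
        GCSettings.LargeObjectHeapCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect();

        // The setting resets to Default after the compacting collection.
        Console.WriteLine(GCSettings.LargeObjectHeapCompactionMode);
    }
}
```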
⚡ Direct Memory Allocation: Escape the GC
🔥 Using `stackalloc` for Small Buffers
Span<int> numbers = stackalloc int[10]; // No heap allocation!
✅ Perfect for temporary data where you don’t want the GC involved.
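A common idiom (a sketch, assuming C# 8+) is to `stackalloc` only when the size is small, falling back to a heap array otherwise — `Span<T>` papers over the difference:

```csharp
using System;

class StackallocDemo
{
    static int SumFirst(int count)
    {
        // Small buffers: stack. Large buffers: heap. Same Span<int> either way.
        Span<int> buffer = count <= 128 ? stackalloc int[count] : new int[count];

        for (int i = 0; i < buffer.Length; i++) buffer[i] = i;

        int sum = 0;
        foreach (var n in buffer) sum += n;
        return sum;
    }

    static void Main()
    {
        Console.WriteLine(SumFirst(10));   // 45 — stack path
        Console.WriteLine(SumFirst(1000)); // 499500 — heap path
    }
}
```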
🛠️ Using `Memory<T>` for Heap-Friendly Buffers
Memory<int> memory = new int[10]; // Heap-allocated but GC-friendly.
📌 `Memory<T>` is like `Span<T>`, but safe to return from methods and store in fields.
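Because `Memory<T>` is an ordinary (heap-safe) struct, you can return it, store it, and slice it without copying — a minimal sketch:

```csharp
using System;

class MemoryDemo
{
    static Memory<int> MakeBuffer()
    {
        var data = new int[10];
        for (int i = 0; i < data.Length; i++) data[i] = i;
        return data; // implicit conversion from int[] to Memory<int>
    }

    static void Main()
    {
        Memory<int> memory = MakeBuffer();
        Memory<int> slice = memory.Slice(2, 3); // view of {2, 3, 4} — no copy

        // Grab a Span when you actually need to touch the elements.
        Span<int> span = slice.Span;
        Console.WriteLine(span[0]);     // 2
        Console.WriteLine(span.Length); // 3
    }
}
```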
🚨 Boxing & Unboxing: The Silent Performance Killer
📌 Why Boxing Sucks
Boxing happens when a value type is wrapped in an object, forcing heap allocation.
Unboxing reverses this process but adds overhead.
int x = 42;
object obj = x; // BOXING (Allocates on the heap!)
int y = (int)obj; // UNBOXING (Extra CPU cycles!)
✅ How to Avoid Boxing
✔ Use Generics Instead of `object`
void Print<T>(T obj) { Console.WriteLine(obj); } // No boxing!
✔ Use `Span<T>` for Collections of Value Types
Span<int> numbers = stackalloc int[1000]; // Stack allocation
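You can actually watch boxing allocate, using `GC.GetAllocatedBytesForCurrentThread` (a real API on .NET Core / .NET 5+; the exact byte counts are runtime-dependent, so treat them as illustrative):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class BoxingDemo
{
    static void Main()
    {
        var boxed = new ArrayList(1000);      // stores object — every int gets boxed
        var unboxed = new List<int>(1000);    // stores ints inline — no boxing
        // Both are pre-sized so resizing doesn't pollute the measurement.

        long before = GC.GetAllocatedBytesForCurrentThread();
        for (int i = 0; i < 1000; i++) boxed.Add(i); // 1000 boxes on the heap
        long boxedBytes = GC.GetAllocatedBytesForCurrentThread() - before;

        before = GC.GetAllocatedBytesForCurrentThread();
        for (int i = 0; i < 1000; i++) unboxed.Add(i); // no per-item allocation
        long unboxedBytes = GC.GetAllocatedBytesForCurrentThread() - before;

        // Expect ArrayList to allocate roughly 24 bytes per boxed int on 64-bit,
        // while the pre-sized List<int> allocates nothing inside the loop.
        Console.WriteLine($"ArrayList: {boxedBytes} bytes, List<int>: {unboxedBytes} bytes");
    }
}
```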
🏁 TL;DR
✅ Use `Span<T>` and `Memory<T>` when possible to reduce heap allocations.
✅ Pre-allocate collections when you know the size upfront.
✅ Avoid boxing/unboxing—it’s a silent performance killer.
✅ Don’t hold onto objects longer than necessary (watch those static fields!).
💀 If you don’t manage memory efficiently, the GC will do it for you… eventually—and you might not like when it happens.