Memory Leaks & Debugging
Common memory leak patterns in JavaScript: how to detect, debug, and prevent them with DevTools and proper cleanup.
A memory leak occurs when objects that are no longer needed remain reachable from GC roots, preventing the garbage collector from reclaiming them. Unlike languages with manual memory management (C/C++), JavaScript leaks are almost always caused by unintentional references: forgotten timers, closures, event listeners, and caches that never evict.
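A minimal sketch of the most common culprit, a forgotten timer. The interval callback closes over the object, so everything it references stays reachable long after the object is "done". Names here (Widget, bigBuffer) are illustrative:

```javascript
// A forgotten setInterval keeps everything its callback closes over alive.
class Widget {
  constructor() {
    this.bigBuffer = new Array(1_000_000).fill("data");
    // The callback closes over `this`, so the whole widget
    // (including bigBuffer) stays reachable from the timer list.
    this.timerId = setInterval(() => {
      console.log("items:", this.bigBuffer.length);
    }, 1000);
  }

  destroy() {
    // Without this line, the widget leaks even after all other
    // references to it are dropped.
    clearInterval(this.timerId);
  }
}

const w = new Widget();
w.destroy(); // pairs the allocation (setInterval) with its cleanup
```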
Normal memory usage shows a sawtooth pattern: allocations grow, GC runs and frees memory, repeat. A leak shows a staircase pattern: each GC cycle frees LESS memory because leaked objects accumulate. Eventually, the page crashes with 'out of memory'.
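A quick way to see which pattern you have is to log heap usage over time. A rough sketch using Chrome's non-standard performance.memory API (Chrome-only, and only an approximation):

```javascript
// Logs used JS heap every 5 seconds (Chrome only; performance.memory
// is non-standard). A floor that keeps rising across GC cycles
// suggests the staircase pattern, i.e. a leak.
setInterval(() => {
  if (performance.memory) {
    const usedMB = performance.memory.usedJSHeapSize / 1024 / 1024;
    console.log(`JS heap: ${usedMB.toFixed(1)} MB`);
  }
}, 5000);
```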
Chrome DevTools Memory tab lets you take heap snapshots. Take snapshot A, perform an action, undo it, take snapshot B. Compare: objects that exist in B but not A are potential leaks. Look for 'Detached DOM' trees and large retained sizes.
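A detached DOM tree is exactly the kind of leak these snapshot comparisons surface: the element is removed from the page, but a JavaScript variable still points at it, so the whole subtree appears as "Detached" in the snapshot. A sketch with illustrative names:

```javascript
// Creates a detached DOM tree: the node is removed from the page
// but `detachedNodes` still references it, so it cannot be GC'd.
const detachedNodes = [];

function showAndHidePanel() {
  const panel = document.createElement("div");
  panel.innerHTML = "<ul>" + "<li>row</li>".repeat(1000) + "</ul>";
  document.body.appendChild(panel);

  // Later: removed from the DOM, but still reachable via the array.
  document.body.removeChild(panel);
  detachedNodes.push(panel); // leak: shows up as "Detached" in snapshots
}

// Fix: drop the JS reference too, e.g. detachedNodes.length = 0;
```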
Every allocation should have a corresponding deallocation: addEventListener → removeEventListener, setInterval → clearInterval, subscribe → unsubscribe. In React, useEffect's return function is the cleanup mechanism.
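A minimal React sketch of that pairing: every side effect set up in useEffect is torn down in the returned cleanup function. The useLiveData hook and its URL parameter are hypothetical:

```javascript
import { useEffect, useState } from "react";

// Hypothetical hook: each setup call is paired with its cleanup.
function useLiveData(url) {
  const [data, setData] = useState(null);

  useEffect(() => {
    const controller = new AbortController();
    const id = setInterval(() => {
      fetch(url, { signal: controller.signal })
        .then((res) => res.json())
        .then(setData)
        .catch(() => {}); // ignore abort errors in this sketch
    }, 5000);

    const onResize = () => console.log("resized");
    window.addEventListener("resize", onResize);

    // Cleanup runs on unmount (and before the effect re-runs):
    return () => {
      clearInterval(id);                              // setInterval → clearInterval
      window.removeEventListener("resize", onResize); // add → remove
      controller.abort();                             // cancel in-flight fetches
    };
  }, [url]);

  return data;
}
```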
Key Concepts
Shallow size: the memory held by an object itself, excluding referenced objects, such as a string or number stored in the object's own properties.
Retained size: the total memory that would be freed if the object were garbage collected, including all objects it keeps alive exclusively.
Detached DOM tree: a DOM subtree removed from document.body but still referenced by JavaScript. The entire tree stays in memory.
Weak references: hold references without preventing GC. WeakRef.deref() returns undefined if the target has been collected, and WeakMap keys are weakly held.
```javascript
// Detecting memory leaks with Chrome DevTools:

// 1. Timeline Recording:
// Performance tab → Record → Interact → Stop
// Look for JS Heap growing over time (sawtooth = normal)

// 2. Heap Snapshots:
// Memory tab → Take snapshot → Compare snapshots
// Filter by "Objects allocated between snapshots"

// 3. Allocation Timeline:
// Memory tab → Allocation sampling → Record

// Common debugging pattern:
performance.mark("before");
// ... suspected leaky operation ...
performance.mark("after");
performance.measure("leak-test", "before", "after");

// WeakRef: hold reference without preventing GC
const weakRef = new WeakRef(largeObject);
const obj = weakRef.deref(); // undefined if GC'd

// FinalizationRegistry: callback when object is GC'd
const registry = new FinalizationRegistry((key) => {
  console.log(`Object ${key} was garbage collected`);
});
registry.register(myObject, "myObject-id");
```
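To complement the block above, a sketch of the WeakMap pattern from the key concepts: because keys are weakly held, cached entries disappear automatically when the keyed object is collected. Names are illustrative:

```javascript
// WeakMap keys are weakly held: once `node` has no other references,
// both the key and its cached value become collectable. A plain Map
// here would pin every node in memory forever.
const metadata = new WeakMap();

function getMetadata(node) {
  if (!metadata.has(node)) {
    metadata.set(node, { renderCount: 0, firstSeen: Date.now() });
  }
  return metadata.get(node);
}

let node = document.createElement("div");
getMetadata(node).renderCount++;
node = null; // the entry is now eligible for GC along with the node
```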
Memory leaks cause progressive performance degradation, UI freezes, and eventual crashes. In SPAs and long-running Node.js servers, they are among the most common production stability issues. Mastering leak detection and prevention is essential for production-grade JavaScript.
Common Pitfalls
1. SPA Memory Growth Over Time
Your React SPA's memory usage grows from 50MB to 300MB after 30 minutes of navigation. Users on low-memory devices experience crashes. The sawtooth pattern disappears because GC can't reclaim memory.
Each navigation mounts React components that create subscriptions, event listeners, and closures. Without proper cleanup in useEffect, these accumulate. After 100+ navigations, thousands of orphaned references prevent GC from reclaiming component trees.
Audit every useEffect for a cleanup return function. Use the React DevTools Profiler to identify components that remount frequently. Implement a subscription manager that auto-cleans on unmount (see the sketch after this pitfall). Use WeakMap for component-keyed caches.
Takeaway: SPAs are especially prone to memory leaks because navigations don't trigger full page reloads; objects only become collectable when your cleanup code drops the last reference. Every subscription, listener, and timer in your React app must have a corresponding cleanup.
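A minimal sketch of such a subscription manager, assuming a Redux-style store whose subscribe() returns an unsubscribe function; the useSubscriptions hook and all names here are hypothetical:

```javascript
import { useCallback, useEffect, useRef } from "react";

// Hypothetical hook: components register teardown functions as they
// subscribe, and everything is cleaned up automatically on unmount.
function useSubscriptions() {
  const cleanups = useRef([]);

  // Stable registration function (identity never changes).
  const addCleanup = useCallback((teardown) => {
    cleanups.current.push(teardown);
  }, []);

  useEffect(() => {
    const fns = cleanups.current;
    return () => {
      fns.forEach((fn) => fn()); // auto-clean everything on unmount
      fns.length = 0;
    };
  }, []);

  return addCleanup;
}

function Dashboard({ store }) {
  const addCleanup = useSubscriptions();

  useEffect(() => {
    addCleanup(store.subscribe(() => {}));    // subscribe → unsubscribe
    const id = setInterval(() => {}, 10_000); // setInterval → clearInterval
    addCleanup(() => clearInterval(id));
  }, [store, addCleanup]);

  return null; // rendering omitted in this sketch
}
```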
2. Node.js Server Memory Leak in Production
Your Express.js API server's memory grows by 10MB/hour. After 2 days, it hits the 1.5GB heap limit and crashes with 'FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed'.
A global Map caches database query results without eviction. Each unique query adds an entry that's never removed. The Map grows linearly with unique requests, eventually consuming all available heap.
Replace the Map with an LRU cache (e.g., the lru-cache package) with a maxSize; see the sketch below. Monitor with the --inspect flag and take periodic heap snapshots. Set up PM2 with --max-memory-restart to auto-restart if memory exceeds a threshold.
Takeaway: Server memory leaks differ from client leaks: they compound over hours or days instead of minutes. Unbounded caches are the most common cause. Always use an LRU or TTL-based cache and monitor heap usage in production.
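A minimal sketch of the bounded-cache idea, here as a hand-rolled LRU built on Map's insertion order (in production you would reach for the lru-cache package mentioned in the fix):

```javascript
// Minimal LRU cache: a Map preserves insertion order, so the first
// key is always the least recently used. A capped size means the
// cache can no longer grow without bound.
class LRUCache {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    // Re-insert to mark the entry as most recently used.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // Evict the least recently used (first) entry.
      this.map.delete(this.map.keys().next().value);
    }
  }
}

const queryCache = new LRUCache(500); // bounded, unlike the leaky global Map

// Rough heap monitoring in Node (real API: process.memoryUsage):
setInterval(() => {
  const heapMB = process.memoryUsage().heapUsed / 1024 / 1024;
  console.log(`heap used: ${heapMB.toFixed(1)} MB`);
}, 60_000);
```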