Smart Async Memoization: Cache Success, Clear Failures
Memoization is a cornerstone of performance optimization, but async functions throw a wrench into traditional caching strategies. The naive approach caches everything—including failures—which creates a particularly nasty UX problem: failed requests get stuck in cache, preventing automatic retries and leaving users stranded with stale error states.
Let’s explore a surgical approach that preserves successful results while gracefully handling failures.
The Core Problem
Standard memoization treats all results equally:
// Naive memoization - caches everything
const cache = new Map<string, Promise<any>>();

function naiveMemoize<T>(key: string, fn: () => Promise<T>): Promise<T> {
  if (cache.has(key)) {
    return cache.get(key)!;
  }
  const promise = fn();
  cache.set(key, promise);
  return promise;
}
This breaks down with network failures. A temporary 500 error gets cached indefinitely, blocking all subsequent attempts until manual cache invalidation. Users see persistent error states even after network conditions improve.
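To make the failure mode concrete, here's a minimal sketch (the hypothetical `flakyFetch` stands in for a network call that fails once, then succeeds): the rejected promise stays in the cache, so the retry never reaches the network.

```typescript
// Naive memoization - caches everything, including rejections
const cache = new Map<string, Promise<any>>();

function naiveMemoize<T>(key: string, fn: () => Promise<T>): Promise<T> {
  if (!cache.has(key)) {
    cache.set(key, fn());
  }
  return cache.get(key)!;
}

let attempts = 0;
const flakyFetch = async (): Promise<string> => {
  attempts++;
  if (attempts === 1) throw new Error("transient 500");
  return "ok";
};

async function demo(): Promise<string> {
  try {
    return await naiveMemoize("data", flakyFetch);
  } catch {
    // Retry: the rejected promise is still cached, so this fails too
    return naiveMemoize("data", flakyFetch).catch(() => "stuck on cached failure");
  }
}
```

The retry returns the same cached rejection, and `flakyFetch` never gets a second chance to succeed.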
The Elegant Solution
The key insight: invalidate cache entries only when promises reject, preserving successful results.
class SmartCache<T> {
  private promise?: Promise<T>;

  memoize(fn: () => Promise<T>): Promise<T> {
    if (!this.promise) {
      this.promise = fn().catch(error => {
        // Critical: clear cache before re-throwing
        this.promise = undefined;
        throw error;
      });
    }
    return this.promise;
  }

  clear(): void {
    this.promise = undefined;
  }
}
// Usage
const dataCache = new SmartCache<ApiResponse>();
const getData = () => dataCache.memoize(() => fetch('/api/data').then(r => r.json()));
The magic happens in the catch block: we clear the cache reference before re-throwing. Successful promises remain cached; failed ones don’t pollute future calls.
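A self-contained sketch shows the retry behavior end to end (`flaky` is a stand-in for a request that fails once, then succeeds): the first call rejects and clears the cache, the second retries and succeeds, and the third reuses the cached success.

```typescript
class SmartCache<T> {
  private promise?: Promise<T>;

  memoize(fn: () => Promise<T>): Promise<T> {
    if (!this.promise) {
      this.promise = fn().catch(error => {
        this.promise = undefined; // clear before re-throwing
        throw error;
      });
    }
    return this.promise;
  }
}

let calls = 0;
const flaky = async (): Promise<string> => {
  calls++;
  if (calls === 1) throw new Error("transient failure");
  return "payload";
};

const smartCache = new SmartCache<string>();
```

Calling `smartCache.memoize(flaky)` three times produces one rejection and two successful results, with only two underlying calls.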
Scaling with Arguments
Real-world functions need parameterization. Here’s a type-safe approach using Maps:
class ParameterizedCache<Args extends readonly unknown[], T> {
  private cache = new Map<string, Promise<T>>();

  private serialize(args: Args): string {
    return JSON.stringify(args);
  }

  memoize(args: Args, fn: (...args: Args) => Promise<T>): Promise<T> {
    const key = this.serialize(args);
    if (!this.cache.has(key)) {
      const promise = fn(...args).catch(error => {
        this.cache.delete(key);
        throw error;
      });
      this.cache.set(key, promise);
    }
    return this.cache.get(key)!;
  }

  // Explicit cache management
  clear(args?: Args): void {
    if (args) {
      this.cache.delete(this.serialize(args));
    } else {
      this.cache.clear();
    }
  }
}
// Type-safe wrapper
function createMemoizedFetcher<T>(baseUrl: string) {
  const cache = new ParameterizedCache<[string, RequestInit?], T>();
  return (endpoint: string, options?: RequestInit) =>
    cache.memoize([endpoint, options], async (endpoint, options) => {
      const response = await fetch(`${baseUrl}${endpoint}`, options);
      if (!response.ok) {
        // Use response.text() - statusText isn't available in HTTP/2
        const errorText = await response.text();
        throw new Error(`HTTP ${response.status}: ${errorText}`);
      }
      return response.json();
    });
}
Advanced: AbortController Synthesis
The complexity escalates when dealing with AbortControllers. Multiple callers might use different AbortSignals, but we want to share the underlying request. The solution: synthesize an internal AbortController and proxy cancellation logic.
class AbortAwareCache<T> {
  private promise?: Promise<T>;
  private internalController?: AbortController;
  private activeSignals = new Set<AbortSignal>();
  private hasUnsignaledCaller = false;

  memoize(fn: (signal: AbortSignal) => Promise<T>, signal?: AbortSignal): Promise<T> {
    if (!this.promise) {
      this.internalController = new AbortController();
      this.promise = fn(this.internalController.signal).catch(error => {
        this.cleanup();
        throw error;
      });
    }
    if (signal) {
      // Track this caller's signal
      this.activeSignals.add(signal);
      // If caller cancels, remove from active set
      signal.addEventListener('abort', () => {
        this.activeSignals.delete(signal);
        this.checkShouldAbort();
      });
    } else {
      // A caller without a signal can never cancel, so the shared
      // request must stay alive no matter what other callers do
      this.hasUnsignaledCaller = true;
    }
    return this.promise;
  }

  private checkShouldAbort(): void {
    // Abort internal request only when ALL callers have cancelled
    if (this.hasUnsignaledCaller) return;
    const hasActiveSignals = Array.from(this.activeSignals)
      .some(signal => !signal.aborted);
    if (!hasActiveSignals && this.internalController && !this.internalController.signal.aborted) {
      this.internalController.abort();
      this.cleanup();
    }
  }

  private cleanup(): void {
    this.promise = undefined;
    this.internalController = undefined;
    this.activeSignals.clear();
    this.hasUnsignaledCaller = false;
  }
}
React Integration Patterns
For React applications, wrap this in custom hooks:
function useApiCall<T>(
  endpoint: string,
  options?: RequestInit,
  dependencies: React.DependencyList = []
) {
  const [state, setState] = useState<{
    data?: T;
    error?: Error;
    loading: boolean;
  }>({ loading: false });

  // Recreate the cache when the request identity changes, so a new
  // endpoint never reuses a promise cached for the old one
  const optionsKey = JSON.stringify(options);
  const cache = useMemo(() => new SmartCache<T>(), [endpoint, optionsKey]);

  const execute = useCallback(async () => {
    setState(prev => ({ ...prev, loading: true, error: undefined }));
    try {
      const data = await cache.memoize(() =>
        fetch(endpoint, options).then(r => r.json())
      );
      setState({ data, loading: false });
    } catch (error) {
      setState({ error: error as Error, loading: false });
    }
  }, [cache, endpoint, optionsKey, ...dependencies]);

  useEffect(() => {
    execute();
  }, [execute]);

  return { ...state, refetch: execute, clearCache: () => cache.clear() };
}
Production Considerations
This pattern shines in scenarios with:
- Expensive computations that succeed most of the time
- Network requests with transient failures
- User-triggered actions that should retry seamlessly
- Real-time updates where stale success is better than cached failure
For complex applications, consider abstracting this into a library like abortable-promise-cache, which provides additional features like TTL, size limits, and metrics.
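As a taste of what such a library adds, here's a minimal TTL variant of SmartCache (a sketch only; real libraries also handle size limits, metrics, and eviction policy): a success expires after `ttlMs` and is re-fetched, while failures still clear immediately.

```typescript
// Sketch: SmartCache with a time-to-live on successful entries.
class TtlCache<T> {
  private promise?: Promise<T>;
  private cachedAt = 0;

  constructor(private ttlMs: number) {}

  memoize(fn: () => Promise<T>): Promise<T> {
    const now = Date.now();
    if (!this.promise || now - this.cachedAt > this.ttlMs) {
      this.cachedAt = now;
      this.promise = fn().catch(error => {
        this.promise = undefined; // failures still clear right away
        throw error;
      });
    }
    return this.promise;
  }
}
```

Within the TTL window calls share one underlying request; once the entry expires, the next call transparently refetches.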
The Bottom Line
Smart memoization isn’t just about performance—it’s about resilience. By distinguishing between cacheable successes and transient failures, we create applications that gracefully handle the chaotic nature of distributed systems while maintaining the performance benefits of aggressive caching.
The pattern is simple: cache the good, clear the bad. But the implementation details matter enormously for production-grade applications handling real user traffic and real network conditions.