Resource Management
RunCache provides several features for managing memory usage and ensuring that resources are cleaned up properly. This guide explains how to optimize resource usage and implement cleanup in your applications.
Understanding Resource Management
Effective resource management in RunCache involves:
Controlling memory usage through cache size limits
Implementing appropriate eviction policies
Ensuring proper cleanup of timers and event listeners
Managing application lifecycle events
Optimizing cache performance
Cache Size Management
Setting Maximum Cache Size
You can limit the maximum number of entries in the cache using the maxEntries configuration option:
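The sketch below assumes a configure-style setup call; the exact method name and option shape are assumptions and may differ in your version:

```typescript
import { RunCache } from "run-cache";

// Cap the cache at 1,000 entries; once the limit is reached, the configured
// eviction policy decides which entries are removed. (configure() is assumed.)
RunCache.configure({
  maxEntries: 1000,
});
```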
When the cache reaches the maximum size, the configured eviction policy determines which entries are removed to make space for new ones.
Available Eviction Policies
RunCache supports three eviction policies:
NONE: No automatic eviction (default). Cache entries are only removed via TTL or manual deletion.
LRU (Least Recently Used): Removes the least recently accessed entries when the cache exceeds its maximum size. This is ideal for most applications as it preserves frequently accessed data.
LFU (Least Frequently Used): Removes the least frequently accessed entries when the cache exceeds its maximum size. When entries have the same access frequency, the oldest entry is removed first. This is useful for applications where access frequency is more important than recency.
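As an illustration, a policy could be selected alongside maxEntries like this (the configure call and the EvictionPolicy import are assumptions; check your version for the exact names):

```typescript
import { RunCache, EvictionPolicy } from "run-cache";

// Assumed configuration shape: evict the least recently used entries
// once the cache holds more than 500 items.
RunCache.configure({
  maxEntries: 500,
  evictionPolicy: EvictionPolicy.LRU, // or EvictionPolicy.LFU / EvictionPolicy.NONE
});
```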
Monitoring Evictions
You can monitor evictions by setting up a custom middleware:
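A rough sketch of the idea; the use() registration and the context/next middleware shape shown here are assumptions about the middleware API, not a confirmed signature:

```typescript
import { RunCache } from "run-cache";

// Hypothetical middleware shape: inspect each cache operation and log evictions.
RunCache.use(async (context, next) => {
  const result = await next();
  if (context.operation === "evict") {
    console.warn(`Cache evicted key: ${context.key}`);
  }
  return result;
});
```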
Memory Usage Optimization
Using TTL for Automatic Cleanup
Set appropriate TTL (Time-to-Live) values to automatically remove entries that are no longer needed:
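For example (a sketch; the object-style set call and millisecond TTL unit are assumptions based on this guide's terminology):

```typescript
import { RunCache } from "run-cache";

// Session data becomes stale quickly, so let it expire after 5 minutes.
await RunCache.set({
  key: "session:abc123",
  value: JSON.stringify({ userId: 42 }),
  ttl: 5 * 60 * 1000, // assuming TTL is expressed in milliseconds
});
```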
Manual Cleanup
For immediate cleanup of specific entries or groups of entries:
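A sketch of targeted cleanup, assuming delete- and flush-style methods; the pattern-based delete shown last is a hypothetical convenience that may not exist under this name:

```typescript
import { RunCache } from "run-cache";

// Remove a single entry.
await RunCache.delete("user:42:profile");

// Hypothetical pattern-based cleanup for a group of related entries.
await RunCache.delete("user:42:*");

// Remove everything.
await RunCache.flush();
```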
Optimizing Value Size
Minimize the size of cached values:
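For instance, cache a trimmed projection of a record instead of the full API response (the set call shape is assumed as above):

```typescript
import { RunCache } from "run-cache";

interface UserRecord {
  id: number;
  name: string;
  email: string;
  // ...plus many other fields returned by the API
}

async function cacheUserSummary(user: UserRecord): Promise<void> {
  // Store only the fields that readers actually need.
  const summary = { id: user.id, name: user.name };
  await RunCache.set({
    key: `user:${user.id}:summary`,
    value: JSON.stringify(summary),
    ttl: 10 * 60 * 1000,
  });
}
```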
Timer Management
RunCache handles TTL expiration and automatic refetching internally. These mechanisms are managed for you, but it's important to understand how they work.
How Timers Work in RunCache
TTL Timers: RunCache uses lazy expiration, checking if entries have expired only when they're accessed. This avoids the overhead of maintaining timers for every entry.
Refetch Timers: For entries with autoRefetch: true, RunCache sets up timers to trigger background refreshes when the TTL expires.
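For example (a sketch; the sourceFn option name, the object-style set call, and the placeholder URL are assumptions):

```typescript
import { RunCache } from "run-cache";

await RunCache.set({
  key: "config:feature-flags",
  ttl: 60 * 1000,       // refresh roughly every minute
  autoRefetch: true,    // from this guide: triggers a background refresh on expiry
  sourceFn: async () => {
    const res = await fetch("https://example.com/feature-flags"); // placeholder URL
    return res.text();
  },
});
```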
Cleaning Up Timers
When you delete a cache entry or flush the cache, RunCache automatically cleans up any associated timers:
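In other words, no extra bookkeeping is required beyond the normal calls (method names assumed as in earlier sketches):

```typescript
import { RunCache } from "run-cache";

// Deleting an entry also cancels any refetch timer attached to it.
await RunCache.delete("config:feature-flags");

// Flushing the cache cancels all outstanding timers at once.
await RunCache.flush();
```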
Event Listener Management
As your application grows, you might accumulate many event listeners. It's important to clean them up when they're no longer needed.
Removing Event Listeners
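A sketch of registering and later removing a listener; the onExpiry and clearEventListeners names are assumptions about the event API and may be spelled differently in your version:

```typescript
import { RunCache } from "run-cache";

// Register a listener for expiry events (assumed event API).
RunCache.onExpiry((event) => {
  console.log(`Key expired: ${event.key}`);
});

// Remove listeners during teardown so they don't outlive the code that added them.
// Whether removal is global or per listener/key depends on your version.
RunCache.clearEventListeners();
```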
Best Practices for Event Listeners
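As a general rule, register listeners once during startup rather than inside request handlers, keep handler functions small and fast, and remove listeners as part of component or service teardown so they do not outlive the code that created them.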
Middleware Management
Middleware functions can accumulate over time and impact performance. Clean them up when they're no longer needed:
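A sketch of the idea; the use() registration is assumed as above, and clearMiddleware is a hypothetical name for bulk removal, so check what your version actually exposes:

```typescript
import { RunCache } from "run-cache";

// Register middleware once, e.g. during application startup (assumed use() API).
RunCache.use(async (context, next) => next());

// Hypothetical bulk removal when middleware is no longer needed,
// for example between test runs or before reconfiguring the cache.
RunCache.clearMiddleware();
```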
Application Lifecycle Management
Automatic Cleanup on Termination
RunCache automatically registers handlers for SIGTERM and SIGINT signals in Node.js environments to ensure proper cleanup when the application is shutting down:
These handlers perform a complete shutdown of RunCache, cleaning up all resources.
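Conceptually, the automatic registration behaves roughly like the sketch below; you do not need to write this yourself, it only illustrates what happens on shutdown:

```typescript
import { RunCache } from "run-cache";

// Illustrative only: RunCache registers equivalent handlers for you in Node.js.
const cleanUp = async () => {
  await RunCache.shutdown(); // clears entries, cancels timers, removes listeners
};

process.on("SIGTERM", cleanUp);
process.on("SIGINT", cleanUp);
```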
Manual Shutdown
You can also manually trigger a complete shutdown of RunCache:
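For example (a sketch; whether shutdown returns a promise depends on your version):

```typescript
import { RunCache } from "run-cache";

async function stopWorker(): Promise<void> {
  // Stop accepting new work first, then release all cache resources.
  await RunCache.shutdown();
}

await stopWorker();
```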
The shutdown method:
Clears all cache entries
Cancels all timers
Removes all event listeners
Resets the cache configuration to default values
This is particularly useful in long-running applications or when you need to release resources manually.
Persistent Storage Management
When using persistent storage adapters, it's important to manage storage resources effectively:
Storage Cleanup
To clean up persistent storage:
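One common cleanup path is simply clearing the cache; whether the persisted snapshot is cleared along with it, or must be removed separately, depends on the adapter, so the sketch below is illustrative only:

```typescript
import { RunCache } from "run-cache";

// With a persistent adapter configured, flushing the in-memory cache is the
// usual cleanup path; confirm against your adapter whether the persisted
// data is cleared as well or must be removed through the adapter itself.
await RunCache.flush();
```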
Performance Optimization Strategies
1. Use Appropriate TTL Values
Match TTL values to data volatility:
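For example (set call shape and millisecond TTLs assumed as in earlier sketches):

```typescript
import { RunCache } from "run-cache";

// Highly volatile data: seconds.
await RunCache.set({ key: "stock:AAPL:price", value: "182.52", ttl: 10 * 1000 });

// Moderately volatile data: minutes.
await RunCache.set({
  key: "user:42:profile",
  value: JSON.stringify({ name: "Ada Lovelace" }),
  ttl: 15 * 60 * 1000,
});

// Mostly static data: hours.
await RunCache.set({
  key: "config:supported-locales",
  value: JSON.stringify(["en", "de", "fr"]),
  ttl: 24 * 60 * 60 * 1000,
});
```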
2. Implement Staggered Expiration
Add small random variations to TTL to prevent mass expiration:
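A small helper can add jitter before the value is written (the set call shape is assumed as before; the jitter logic itself is plain TypeScript):

```typescript
import { RunCache } from "run-cache";

// Add up to ±10% random jitter so entries written together don't all expire together.
function jitteredTtl(baseMs: number, spread = 0.1): number {
  const jitter = baseMs * spread * (Math.random() * 2 - 1);
  return Math.round(baseMs + jitter);
}

await RunCache.set({
  key: "report:daily-summary",
  value: JSON.stringify({ generatedAt: Date.now() }),
  ttl: jitteredTtl(60 * 60 * 1000), // roughly one hour, ± up to six minutes
});
```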
3. Batch Related Operations
Group related cache operations to minimize overhead:
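For example, related writes can be issued concurrently with Promise.all (set call shape assumed as before):

```typescript
import { RunCache } from "run-cache";

// Write related entries concurrently instead of awaiting each call in sequence.
async function cacheUser(userId: number, profile: object, settings: object): Promise<void> {
  const ttl = 15 * 60 * 1000;
  await Promise.all([
    RunCache.set({ key: `user:${userId}:profile`, value: JSON.stringify(profile), ttl }),
    RunCache.set({ key: `user:${userId}:settings`, value: JSON.stringify(settings), ttl }),
  ]);
}

await cacheUser(42, { name: "Ada" }, { theme: "dark" });
```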
4. Use Structured Key Naming
Adopt a consistent key naming convention for efficient pattern matching:
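For example, an entity:id:attribute scheme keeps related keys predictable (the set/delete calls are assumed as in earlier sketches):

```typescript
import { RunCache } from "run-cache";

// Convention: <entity>:<id>:<attribute>
const userId = 42;
const profileKey = `user:${userId}:profile`;
const ordersKey = `user:${userId}:orders`;

await RunCache.set({ key: profileKey, value: JSON.stringify({ name: "Ada" }), ttl: 900_000 });
await RunCache.set({ key: ordersKey, value: JSON.stringify([101, 102]), ttl: 900_000 });

// Consistent prefixes make it easy to target all of a user's entries for cleanup.
await RunCache.delete(profileKey);
await RunCache.delete(ordersKey);
```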
5. Implement Cache Warming
Pre-populate critical cache entries on application startup:
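For example (a sketch; the placeholder URL and the set call shape are assumptions):

```typescript
import { RunCache } from "run-cache";

// Run once during application startup, before serving traffic.
async function warmCache(): Promise<void> {
  // Placeholder URL: replace with the real source of your critical data.
  const flags = await fetch("https://example.com/feature-flags").then((r) => r.text());
  await RunCache.set({ key: "config:feature-flags", value: flags, ttl: 60 * 60 * 1000 });
}

await warmCache();
```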
Memory Leak Prevention
1. Clean Up Event Listeners
2. Avoid Reference Cycles
Be careful with metadata that might create reference cycles:
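For example, store identifiers in the cached value rather than object references, so the entry cannot keep a large or cyclic object graph alive (set call shape assumed as before):

```typescript
import { RunCache } from "run-cache";

const customer = { id: 42, name: "Ada" };
const order = { id: 7, customer }; // object graph with shared references

// Cache a flat projection that stores identifiers instead of object references.
await RunCache.set({
  key: `order:${order.id}`,
  value: JSON.stringify({ id: order.id, customerId: order.customer.id }),
  ttl: 5 * 60 * 1000,
});
```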
3. Monitor Memory Usage
Implement memory usage monitoring:
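A minimal example using Node's built-in process.memoryUsage(); thresholds and intervals are up to your application:

```typescript
// Periodically log Node.js heap usage so cache-driven growth shows up early.
const MB = 1024 * 1024;

setInterval(() => {
  const { heapUsed, heapTotal } = process.memoryUsage();
  console.log(`heap: ${(heapUsed / MB).toFixed(1)} MB of ${(heapTotal / MB).toFixed(1)} MB`);
}, 60_000);
```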
Next Steps
Now that you understand resource management, explore these related topics:
Eviction Policies - Learn more about cache eviction strategies
TTL and Expiration - Understand time-to-live functionality
Persistent Storage - Learn about saving cache data across application restarts
Middleware - Explore how to intercept and transform cache operations