SDSTOR-22200#883
Rename BlkAllocMetrics to VarsizeBlkAllocMetrics. The class is used exclusively by VarsizeBlkAllocator and its slab subsystem (SlabMetrics registers it as its parent), so the new name is accurate and leaves room for a separate FixedBlkAllocMetrics class.
FixedBlkAllocMetrics: new metrics class with num_alloc and num_alloc_failure counters and a blk_alloc_memory_size gauge. Wired into the FixedBlkAllocator constructor; the gauge is set once at construction since MPMC queue capacity is fixed (total_blks × 12B per slot: 6B BlkId + 6B folly sequence field, confirmed via GDB on a 461GB index device = 10.06GB).

VarsizeBlkAllocMetrics: adds the same blk_alloc_memory_size gauge, updated once after FreeBlkCacheQueue construction via the new total_slab_capacity() helper (sum of all slabs' entry_capacity() × 12B per slot = 1.60GB for the production data blkstore configuration).
The cache evictor's get_size() callback was returning only m_cache_size (the raw node data buffer), but each cached entry allocates additional heap memory:

    actual_cost = m_cache_size + sizeof(CacheBufferType)
                  + sizeof(homeds::MemVector)
                  + sizeof(homeds::MemPiece)
This missing overhead caused two interconnected bugs:

1. The cache_size metric underreported actual RSS by a factor of
   (m_cache_size + overhead) / m_cache_size.
2. The evictor's m_cur_size tracked only m_cache_size, so by the time m_cur_size
   approached m_max_size, actual RSS was already at
   m_max_size × (m_cache_size + overhead) / m_cache_size.
   This prevented eviction from triggering before OOM.
The fix adds the overhead using compile-time sizeof() for each component,
ensuring both the cache_size gauge and eviction threshold (m_cur_size vs
m_max_size) accurately reflect true memory consumption.
For typical 512B btree nodes: overhead ≈ 320B → multiplier ≈ 1.625×
For larger 4KB nodes: same absolute overhead → multiplier ≈ 1.078×
The formula scales correctly across different node sizes without hardcoding.
Add verify_node_fowarding() using std::queue for iterative tree traversal.

Benefits over recursive verify_node_recursive():
- Reduces lock contention: ancestor nodes are unlocked while descendants are processed
- Enables cache eviction during traversal: ref_count drops to 1 immediately per node
- Lower peak stack depth for deep trees

This does NOT solve cross-volume cache accumulation during recovery. The actual OOM fix is force_evict() after each volume completes (separate commit).

Add a recursive parameter (default false) to verify_tree(), propagated through:
- btree.hpp: verify_tree(update_debug_bm, recursive)
- mapping.hpp/cpp: verify_tree delegation
- volume.hpp/cpp: verify_tree delegation

The dispatcher verify_node() routes to verify_node_recursive() when recursive=true, and to verify_node_fowarding() when recursive=false (the default).
Add release_cached_tree() to evict cached btree nodes when safe:
- Buffer not locked (try_lock succeeds)
- Buffer not dirty (no pending writeback)
- ref_count permits eviction (only cache/wb_cache hold refs)

Implementation:
- btree.hpp: EvictionStats struct + release_cached_tree() + release_cached_nodes_recursive()
- ssd_btree.hpp: free_node_from_cache() with isSyncCall=true to avoid a null deref on wb_req->bcp
- mapping.hpp: delegation wrapper
- volume.hpp: delegation wrapper

Safety: Cache::erase (cache_only=true) correctly handles wb_cache refs: the buffer is removed from the hash_set/eviction list while wb_cache retains a ref for checkpoint flush.

Use case: call after verify_tree() during recovery to release per-volume cache before processing the next volume, preventing cross-volume memory accumulation.
…lation

In vol_recovery_start_phase2(), call release_cached_tree() after recovery_start_phase2() finishes for each volume. This proactively evicts clean cached btree nodes, providing:

1. Limits peak memory: max(single volume) instead of sum(all volumes). Without this fix, cache accumulates across volumes during sequential recovery.
2. Frees cache space for dirty buffers: clean nodes from verify_tree() consume cache quota but provide zero value after recovery. Evicting them makes room for dirty buffers that need cache space for writeback/checkpoint.
3. Defensive against future bugs: reduces OOM risk from any dirty buffer accumulation by not wasting cache on stale clean buffers.

Critical ordering per volume:
1. verify_tree() - loads nodes into cache for verification
2. recovery_start_phase2() - io_replay needs nodes in cache
3. release_cached_tree() - safe to evict after io_replay completes
Add release_cache_after_recovery boolean flag to GeneralConfig in homeblks_config.fbs with default value true. This allows runtime control of cache release behavior during volume recovery phase 2. When enabled (default), cached btree nodes are safely evicted after each volume's recovery to limit peak memory usage and free cache space for dirty buffers. The flag is marked as hotswap, allowing dynamic configuration updates without restart.
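The schema change itself is not shown; a hypothetical sketch of how such a flag could look in homeblks_config.fbs (field syntax and the hotswap attribute are assumptions based on common FlatBuffers usage, not the actual file contents):

```
// Hypothetical fragment of homeblks_config.fbs -- not the actual schema.
table GeneralConfig {
    // ... existing fields ...

    // When true, evict cached btree nodes after each volume's recovery
    // phase 2 completes. Hotswap: may be changed at runtime without restart.
    release_cache_after_recovery: bool = true (hotswap);
}
```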
Add recovery_cache_release_latency gauge to HomeBlksMetrics and m_cache_release_ms field to HomeBlksRecoveryStats to track time spent releasing cached btree nodes during volume recovery phase 2. The metric accumulates time across all volumes and logs per-volume cache release duration. This enables monitoring the performance overhead of cache eviction and helps identify if the cache release operation becomes a bottleneck during recovery.
Run ./apply-clang-format.sh to format all C++ source files according to src/.clang-format style configuration. This ensures consistent code style across the codebase.
    homestore::BlkId bid(node_id);

    auto req = writeback_req_t::make_request();
    req->isSyncCall = true;
isSyncCall = true prevents a null deref on wb_req->bcp->cp_id - but that's a call-site workaround. The null check belongs in the wb layer.
    // This formula ensures cache_size metric and eviction threshold reflect true RSS cost
    static constexpr uint32_t k_overhead{sizeof(CacheBufferType) + sizeof(homeds::MemVector) +
                                         sizeof(homeds::MemPiece)};
    return cbuf->get_cache_size() + k_overhead;
get_size() feeds Evictor::add_record directly - this changes m_cur_size accounting for all IO paths, not just recovery. The accounting fix is correct (old code undercounted RSS), but at any configured m_max_size the cache holds ~38% fewer nodes than before. Tuned configs may need revisiting.
     *
     * @return : true if this node including all its children are not corrupted; false if not
     */
    bool verify_node_fowarding(bnodeid_t bnodeid, BtreeNodePtr parent_node, uint32_t indx, bool update_debug_bm) {
Typo: verify_node_fowarding → verify_node_forwarding
        return true;
    }

    void release_cached_nodes_recursive(bnodeid_t root_bnodeid, EvictionStats& stats) {
Name says recursive; implementation is iterative BFS (std::queue, no self-call). Rename to release_cached_nodes_bfs or just release_cached_nodes.
Prevent OOM during volume recovery by releasing cached btree nodes
Problem Statement
During multi-volume recovery, HomeStore experiences OOM (Out of Memory) crashes when the total cached btree nodes across all volumes exceed available memory. This occurs because:
1. Cross-volume cache accumulation: During vol_recovery_start_phase2(), each volume's verify_tree() loads all btree nodes into cache sequentially. Without eviction between volumes, total cache size = sum(all volumes).

2. Clean node waste: After recovery completes, cached nodes from verify_tree() provide zero value but continue consuming cache quota, leaving no room for dirty buffers that actually need writeback.

3. Peak memory formula: With N volumes and a per-node footprint of 832 bytes for 512B nodes (buffer + MemVector + MemPiece + BtreeNode; buffers range 512B-4096B), peak memory during recovery reaches:

   Total = N × nodes_per_volume × (buffer_size + overhead)

   Example: 13 volumes × 6.86M nodes × 832 bytes ≈ 77.3GB, exceeding typical 60GB limits.
Root Cause Analysis (GDB Session Evidence)
Extensive GDB investigation on a production pod revealed:

- dirty_buf_cnt=0, ruling out dirty buffer accumulation

Solution
This PR implements a two-pronged approach to prevent OOM:
1. Iterative Btree Verification (Performance Improvement)
- Added verify_node_fowarding(): iterative BFS traversal using std::queue
- Releases each BtreeNodePtr immediately after processing a node

2. Safe Cache Release After Recovery (OOM Prevention)
- New release_cached_tree() to safely evict btree node buffers after each volume's recovery
- New free_node_from_cache() in ssd_btree.hpp to evict buffers with proper safety checks:
  - Buffer not locked (try_lock succeeds)
  - Buffer not dirty (no pending writeback)
  - ref_count permits eviction (only cache/wb_cache hold refs)
- Cache::erase(cache_only=true) removes the buffer from the hash_set/eviction list while wb_cache retains a reference for checkpoint flush

3. Integration & Configuration
- Call release_cached_tree() in vol_recovery_start_phase2() AFTER recovery_start_phase2() completes for each volume
- New release_cache_after_recovery config flag (default: true) with hotswap support
- New recovery_cache_release_latency metric to track performance overhead
- Extended HomeBlksRecoveryStats to log cache release time

Technical Details
Cache Eviction Safety Checks: