Problem
Large binary reads in readN() and readNGrow() perform chunked make([]byte, alloc) allocations, each capped at bytesAllocLimit (1MB). For large objects, this produces multiple intermediate chunk slices that become garbage as soon as they are appended into the result.
Proposal
Use the decoder's existing buf field more aggressively for intermediate reads. Instead of append(b, make([]byte, n-len(b))...), grow d.buf once and sub-slice it. This consolidates multiple small allocations into one reusable buffer.
Files
decode.go — readN() (line ~696), readNGrow() (line ~730)
Expected Impact
MEDIUM — reduces allocation count for large binary/string values. Most impactful for workloads with large byte arrays or strings (>1KB).