
perf: readN/readNGrow buffer reuse for large reads #63

@xe-nvdk

Description

Problem

Large binary reads in readN() and readNGrow() perform chunked make([]byte, alloc) allocations, each capped at bytesAllocLimit (1MB). For large objects this produces a series of intermediate allocations that become garbage immediately after they are copied into the result.

Proposal

Use the decoder's existing buf field more aggressively for intermediate reads. Instead of append(b, make([]byte, n-len(b))...), grow d.buf once and sub-slice it. This consolidates multiple small allocations into one reusable buffer.

Files

  • decode.go — readN() (line ~696), readNGrow() (line ~730)

Expected Impact

MEDIUM — reduces allocation count for large binary/string values. Most impactful for workloads with large byte arrays or strings (>1KB).
