perf: pool map allocations in decode path #61

@xe-nvdk

Description

Problem

Every map decode allocates a fresh map via make(map[string]string, n), make(map[string]interface{}, n), or reflect.MakeMapWithSize(). Under sustained high-throughput decoding, these allocations generate significant GC pressure.

Proposal

Use a sync.Pool of pre-allocated maps, cleared with clear() (Go 1.21+) before reuse. This is tricky because the caller retains the decoded map: pooling only works if the caller explicitly returns the map (an opt-in API), or if we pool only an intermediate decode buffer that the caller never sees.

Alternative: provide a UnmarshalReuse(data []byte, v interface{}) API that reuses existing maps/slices in v instead of allocating new ones.

Files

  • decode_map.go: decodeMapStringStringValue, decodeMapStringInterfaceValue, decodeMapDefault

Expected Impact

MEDIUM-HIGH: eliminates one map allocation per map decode. Actual impact depends on how map-heavy the workload is.

Notes

Requires careful design: maps returned to callers can't be pooled without an explicit return mechanism. Consider a Decoder.UsePreallocateValues(true) extension, or reusing existing map values already present in the destination.

Metadata


Labels

    performance (Performance optimization)
