feat(perf): tune aggressive compression with smoothness guards#3

Open
qgkfr8w5jq-cyber wants to merge 1 commit into RMS-Server:1.21.6 from qgkfr8w5jq-cyber:codex/find-details-on-26.1/26.1.1-using-java-25
Conversation

@qgkfr8w5jq-cyber

No description provided.

@Trirrin
Member

Trirrin commented Apr 6, 2026

Review

A few things need explanation before this can move forward.

Java 25 — Why?

```diff
-def targetJavaVersion = 21
+def targetJavaVersion = 25
```

MC 1.21.6 targets Java 21. This change breaks build compatibility for everyone not running a Java 25 toolchain. It's also completely unrelated to the PR title ("tune aggressive compression with smoothness guards"). Was this intentional or accidental? Either way it needs to be dropped.

Default values — do you have benchmarks?

| Parameter | Old | New |
| --- | --- | --- |
| compressionLevel | 6 | 10 |
| contextLevel | 23 | 25 |
| aggregationMinBatchPackets | 4 | 6 |
| aggregationMaxExtraCycles | 2 | 1 |
  • compressionLevel 6→10 significantly increases CPU cost per packet, and contextLevel 23→25 quadruples per-connection window memory (2^23 = 8 MiB → 2^25 = 32 MiB). What testing showed these are better defaults?
  • aggregationMinBatchPackets was raised to 6 while aggregationMaxExtraCycles was dropped to 1 — these work against each other: a higher batch threshold combined with a shorter wait means more packets are sent unbatched. What's the rationale?
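To make the memory claim above concrete, here is a minimal sketch assuming `contextLevel` maps to a zstd-style window log, where the per-connection window is 2^level bytes (the mapping is an assumption based on the 8 MB → 32 MB figures, not the actual config code):

```java
// Sketch: per-connection window memory implied by contextLevel,
// assuming window size = 2^contextLevel bytes (zstd windowLog semantics).
public class WindowMemory {
    static long windowBytes(int windowLog) {
        return 1L << windowLog; // 2^windowLog bytes
    }

    public static void main(String[] args) {
        long oldMib = windowBytes(23) / (1024 * 1024); // 8 MiB at level 23
        long newMib = windowBytes(25) / (1024 * 1024); // 32 MiB at level 25
        System.out.println(oldMib + " MiB -> " + newMib + " MiB (4x per connection)");
    }
}
```

On a server with hundreds of connections, that 4x multiplies directly into hundreds of extra megabytes of steady-state memory.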

Config getter in hot path

The old code used static final constants for MIN_BATCH_PACKETS and MAX_EXTRA_CYCLES. Now NotEnoughBandwidthConfig.get() (which calls ConfigHelper.getConfigRead()) runs every 50ms flush tick and every write() call. On a server with many connections this adds up. If these need to be configurable, cache the values rather than calling get() on every invocation.
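One way to keep these configurable without the per-call cost is to cache them in fields and refresh only on config (re)load. This is a hedged sketch; the class and method names are illustrative, not the mod's actual API:

```java
// Sketch: cache config-derived values so the flush tick and write() path
// read a field instead of calling NotEnoughBandwidthConfig.get() each time.
// Names here are assumptions for illustration only.
public class BatchSettings {
    // volatile so a config reload on another thread is visible to the I/O threads
    private static volatile int minBatchPackets = 4;
    private static volatile int maxExtraCycles = 2;

    // Call once at startup and again from the config-reload hook,
    // never from the hot path.
    static void refresh(int newMinBatchPackets, int newMaxExtraCycles) {
        minBatchPackets = newMinBatchPackets;
        maxExtraCycles = newMaxExtraCycles;
    }

    static int minBatchPackets() { return minBatchPackets; }
    static int maxExtraCycles() { return maxExtraCycles; }
}
```

The hot path then stays as cheap as the old `static final` constants while still allowing runtime reconfiguration.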

chunkCacheCompressionLevel upper bound

The clamp allows up to 19. Zstd level 19 is extremely slow — chunk cache writes happen on the player's receive path, so high levels will cause noticeable stutter. Consider capping at 12, or at minimum document the cost of high values.
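The suggested tighter clamp is a one-liner. This sketch uses a lower bound of 1 and the proposed cap of 12; the exact bounds and method name are the review's suggestion, not the current code:

```java
// Sketch: clamp chunkCacheCompressionLevel to [1, 12] instead of allowing 19.
// Zstd levels above ~12 trade large CPU cost for small ratio gains, which
// matters on the player's receive path.
public class CompressionClamp {
    static final int MIN_LEVEL = 1;
    static final int MAX_LEVEL = 12;

    static int clamp(int configured) {
        return Math.max(MIN_LEVEL, Math.min(MAX_LEVEL, configured));
    }
}
```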

Compression savings check — good idea, needs cleanup

The logic to check whether compression actually saves enough bytes before using it is sound, and the wire format stays compatible. However, the shouldTryCompress=false path and the shouldTryCompress=true && useCompressed=false path produce identical output — they could be unified to reduce nesting.
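A sketch of the unification: since both uncompressed paths emit the same bytes, the decision can collapse to a single boolean and one exit. The record and method names here are illustrative, not the PR's actual identifiers:

```java
// Sketch: unify "compression not attempted" and "compression attempted but
// not worth it" into one plain-payload branch.
public class CompressionChoice {
    record Encoded(boolean compressed, byte[] payload) {}

    // compressedOrNull is null when compression was skipped up front.
    static Encoded encode(byte[] raw, byte[] compressedOrNull, int minSavedBytes) {
        boolean useCompressed = compressedOrNull != null
                && raw.length - compressedOrNull.length >= minSavedBytes;
        // Single exit: both rejection cases fall through to the raw payload,
        // so the wire format is identical either way.
        return useCompressed ? new Encoded(true, compressedOrNull)
                             : new Encoded(false, raw);
    }
}
```

This removes the duplicated serialization branch and a level of nesting without changing any output bytes.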

@Trirrin Trirrin added invalid This doesn't seem right question Further information is requested labels Apr 6, 2026