
feat: use provided maxConcurrentStreams as multiplexing factor#4158

Open
slagiewka wants to merge 1 commit intonodejs:mainfrom
slagiewka:max_concurrent_streams_as_multiplexing

Conversation

@slagiewka

@slagiewka slagiewka commented Apr 13, 2025

This relates to...

Closes #4143.

Rationale

For H2 sessions, pipelining is misleading: it is an HTTP/1.1 concept.

This change uses the maxConcurrentStreams parameter as the equivalent of what H2 calls multiplexing. It makes sure that clients do not have to change pipelining in order to achieve multiplexing over an H2 session.
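The intended configuration can be sketched as a plain options object (allowH2 and maxConcurrentStreams are real undici Client options; the values are illustrative, not recommendations):

```javascript
// Illustrative options sketch only, not code from this PR: with allowH2,
// maxConcurrentStreams alone would control concurrency over the single H2
// session, while pipelining keeps its HTTP/1.1-safe default.
const h2ClientOptions = {
  allowH2: true,
  maxConcurrentStreams: 100, // acts as the H2 multiplexing factor
  pipelining: 1 // untouched; retains its HTTP/1.1 meaning
}
console.log(h2ClientOptions.maxConcurrentStreams) // 100
```

Such an object would then be passed as `new Client(origin, h2ClientOptions)`.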

Changes

Features

  • Allow controlling multiplexing over a stream without increasing pipelining.

Bug Fixes

Breaking Changes and Deprecations

  • None, unless users defined maxConcurrentStreams themselves.

Status

Footnotes

  1. This makes undici (non-fetch) results much closer to "native - http2" under a single connection. It is worth noting that any benchmark running on 2+ connections is unfair to "native", which has only a single connection open.

Author

@slagiewka slagiewka left a comment


I didn't touch the H2CClient yet, but it might need to be addressed as well. Granted, that has lower priority since it already defaults pipelining to equal maxConcurrentStreams.

Comment on lines 71 to 73
if (client[kHTTPContext]?.version === 'h2') {
return client[kMaxConcurrentStreams] ?? client[kHTTPContext]?.defaultPipelining ?? 100
}
Author


What remains unclear to me is under which circumstances getPipelining can be called with a client that has no kHTTPContext present.

In any case, this tries to figure out whether we're using H2 and short-circuits on that decision.

The statement below acts as a fallback, but I'm almost 100% sure that client[kPipelining] always wins (is never nullish), since a client is always created with one in its constructor. However, the pipelining setter COULD make it nullish (for some reason), and only then would it fall back to the context's setting.
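The short-circuit under discussion can be sketched against plain mock objects (the symbol names mirror the PR's, but nothing below is real undici code, and the fallback chain is an assumption):

```javascript
// Hypothetical sketch of the proposed getPipelining logic. The symbols
// (kHTTPContext, kMaxConcurrentStreams, kPipelining) mirror the PR; the
// clients here are plain mocks.
const kHTTPContext = Symbol('http context')
const kMaxConcurrentStreams = Symbol('max concurrent streams')
const kPipelining = Symbol('pipelining')

function getPipelining (client) {
  // H2 short circuit: multiplexing is driven by maxConcurrentStreams.
  if (client[kHTTPContext]?.version === 'h2') {
    return client[kMaxConcurrentStreams] ?? client[kHTTPContext]?.defaultPipelining ?? 100
  }
  // Otherwise the pipelining option wins whenever it is set.
  return client[kPipelining] ?? client[kHTTPContext]?.defaultPipelining ?? 1
}

const h2Client = { [kHTTPContext]: { version: 'h2' }, [kMaxConcurrentStreams]: 256 }
const h1Client = { [kHTTPContext]: { version: 'h1' }, [kPipelining]: 1 }
console.log(getPipelining(h2Client)) // 256
console.log(getPipelining(h1Client)) // 1
```

As the comment above notes, the interesting edge case is a client whose kHTTPContext is missing entirely; this mock version simply falls through to the defaults in that case.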

@slagiewka slagiewka marked this pull request as ready for review April 13, 2025 06:55
Member

@metcoder95 metcoder95 left a comment


We are not yet preparing a new major version, so we will need to make this backwards compatible, with a warning that states the change.

We can follow the path of giving maxConcurrentStreams higher priority than pipelining for h2, but still honor pipelining if maxConcurrentStreams is not passed, so we do not introduce a breaking change (just yet).
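That priority order might look something like this (a sketch under the stated assumptions; resolveH2Concurrency is a made-up helper name, not undici code):

```javascript
// Sketch of the backwards-compatible resolution described above:
// maxConcurrentStreams wins for h2 when explicitly provided, otherwise
// pipelining is still honored; the default of 100 is assumed.
function resolveH2Concurrency ({ maxConcurrentStreams, pipelining }) {
  if (maxConcurrentStreams != null) return maxConcurrentStreams
  return pipelining ?? 100
}

console.log(resolveH2Concurrency({ maxConcurrentStreams: 50, pipelining: 10 })) // 50
console.log(resolveH2Concurrency({ pipelining: 10 })) // 10
console.log(resolveH2Concurrency({})) // 100
```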

Member

@mcollina mcollina left a comment


Good work! Can you fix the tests?

Member

@metcoder95 metcoder95 left a comment


lgtm, just minor comment

const noop = () => {}

function getPipelining (client) {
if (client[kHTTPContext]?.version === 'h2' && client[kMaxConcurrentStreamsManual]) {
Member


Let's just add a deprecation warning if maxConcurrentStreams is not set, and we should be good to go

Author


When would that happen? The symbol (whose name I don't like yet 🙈) that I'm using is only set if maxConcurrentStreams is set (line 240).

Or is your suggestion to notify everyone using allowH2 but not setting it, who would therefore miss the benefit of multiplexing?

I'm also not sure WHEN that deprecation warning would happen. I assume somewhere in the Client constructor; otherwise the noise would be significant.

Member


I was thinking of adding it only when kMaxConcurrentStreamsManual is false but pipelining was altered instead, so the warning states that in a future major version pipelining will no longer take the effect of maxConcurrentStreams.
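A minimal sketch of such a warning (names and wording assumed; kMaxConcurrentStreamsManual is the PR's symbol, here reduced to a boolean argument):

```javascript
// Illustrative only: warn once when pipelining was altered for h2 without
// maxConcurrentStreams being set manually.
let warned = false
function maybeWarnDeprecatedPipelining (maxConcurrentStreamsManual, pipeliningAltered) {
  if (!maxConcurrentStreamsManual && pipeliningAltered && !warned) {
    warned = true
    process.emitWarning(
      'pipelining will no longer control HTTP/2 concurrency in a future major version; use maxConcurrentStreams instead',
      'DeprecationWarning'
    )
  }
}
```

Emitting it from a single place (e.g. the constructor, as discussed above) and guarding with a flag keeps the noise down.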

@slagiewka slagiewka changed the title feat!: use maxConcurrentStreams as multiplexing factor feat: use provided maxConcurrentStreams as multiplexing factor Apr 24, 2025
@slagiewka
Author

One comment from me: the original change, enabling 100 concurrent streams for the entire codebase, might have uncovered some issues when multiplexing is in use. The failing tests might be an indication of that.
I'm not yet versed well enough in the internals, but that's a yellow flag to me.

@metcoder95
Member

One comment from me: the original change, enabling 100 concurrent streams for the entire codebase, might have uncovered some issues when multiplexing is in use. The failing tests might be an indication of that. I'm not yet versed well enough in the internals, but that's a yellow flag to me.

Haven't run the benchmarks yet (might do it today), but what have you seen?

@slagiewka
Author

Haven't run the benchmarks yet (might do it today), but what have you seen?

It's actually not in benchmarks but in tests:

test at test/http2.js:1247:1
✖ #2364 - Concurrent aborts (111.166097ms)
  Error [ERR_HTTP2_INVALID_STREAM]: The stream has been destroyed
      at ServerHttp2Stream.respond (node:internal/http2/core:2786:13)
      at Timeout._onTimeout (/home/runner/work/undici/undici/test/http2.js:1252:14)
      at listOnTimeout (node:internal/timers:581:17)
      at process.processTimers (node:internal/timers:519:7) {
    code: 'ERR_HTTP2_INVALID_STREAM'
  }

test at test/http2.js:1:1

This is 100% reproducible for me when pipelining == default maxConcurrentStreams (the original commit) and does not show up otherwise (current state and on main).

At first I thought there might be something wrong with how "destroy" happens on a stream once the request gets aborted, hitting the session instead. But the error happens on the mock server in this test (when trying to respond). I haven't looked yet into why that may happen with many streams vs a single stream.

@metcoder95
Member

metcoder95 commented Apr 25, 2025

I've been running benchmarks with different combinations and I'm finding a strong correlation between pipelining (parallel requests sent through the same session within undici) and the number of max concurrent streams available; performance suffers when both are increased in the same direction and sit close to each other.

See the following comparison:

  • maxConcurrentStreams = 100
  • pipelining = 50
(benchmark screenshot)

in contrast with

  • maxConcurrentStreams = 100
  • pipelining = 100
(benchmark screenshot)

The results are consistent between executions; feel free to play with the combinations, but the closer pipelining is to maxConcurrentStreams, the worse the performance becomes.

Given the results and findings, I suggest re-evaluating the change: instead of keeping them in sync, maintain pipelining as the way to handle concurrent requests within undici, while maxConcurrentStreams keeps stream management independent.

If we want to be explicit, we can add a further option property (e.g. multiplexing) that states the max number of concurrent requests, while maxConcurrentStreams keeps applying to the same concept (although I believe pipelining can cover it with just extra documentation added).

One thing that we need to establish for sure is that pipelining cannot be greater than maxConcurrentStreams.
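That constraint could be enforced with a simple clamp (illustrative; the function name is made up, and warning or throwing are equally valid alternatives to silent clamping):

```javascript
// Sketch: never let the effective concurrency exceed the peer's advertised
// maxConcurrentStreams.
function effectiveConcurrency (pipelining, maxConcurrentStreams) {
  return pipelining > maxConcurrentStreams ? maxConcurrentStreams : pipelining
}

console.log(effectiveConcurrency(100, 50)) // 50 (clamped)
console.log(effectiveConcurrency(10, 50))  // 10
```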

For H2 sessions pipelining is misleading and is a concept of HTTP/1.1.

This makes use of `maxConcurrentStreams` parameter to be equivalent of
what H2 calls multiplexing. It makes sure that clients do not have to
change `pipelining` in order to achieve multiplexing over a H2 stream.

Closes nodejs#4143.
@slagiewka slagiewka force-pushed the max_concurrent_streams_as_multiplexing branch from a261269 to bc5c5f6 Compare February 14, 2026 09:39
@codecov-commenter

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 93.18%. Comparing base (0a23610) to head (bc5c5f6).

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4158      +/-   ##
==========================================
- Coverage   93.19%   93.18%   -0.01%     
==========================================
  Files         109      109              
  Lines       34222    34229       +7     
==========================================
+ Hits        31893    31897       +4     
- Misses       2329     2332       +3     


Member

@mcollina mcollina left a comment


tests are needed

@slagiewka
Author

The results are consistent between executions; feel free to play with the combinations, but the closer pipelining is to maxConcurrentStreams, the worse the performance becomes.

Hey @metcoder95

I've found multiple problems with these benchmarks in general:

  • Pipelining is never used with this change. I've manually hooked into all the places that would set or get the value, and it's never used. Therefore I'm baffled as to why it would impact benchmark results in any way; it doesn't for me.
  • maxConcurrentStreams is effectively a no-op:
    • the test server does not use any explicit settings -> its maxConcurrentStreams defaults to 2^32 - 1
    • undici is instructed to always overwrite it with remoteSettings (granted, this seems to have been added in the meantime)
  • when I patch the server to limit itself to the same value (from the benchmark env), we're actually using limited streams
  • however, I could find no reasonable setting that would tell me the optimal connections + streams config for the best performance
  • in no configuration could I say that HTTP/2 performance is on par with HTTP/1.1. Maybe some profiling is in order to figure that out.

As a bonus, the "native" pass will be at a disadvantage since it always has a single connection (session) open. Maybe sticking to 1 connection/session for all of them and controlling the number of streams would give every pass a fair playing field.

If we want to be explicit, we can add a further option property (e.g. multiplexing) that states the max number of concurrent requests, while maxConcurrentStreams keeps applying to the same concept (although I believe pipelining can cover it with just extra documentation added).

I'm having doubts about such an explicit approach.

  1. The config is already quite loaded.
  2. As a user, I would have a hard time trying to figure out the difference between multiplexing, maxConcurrentStreams, and pipelining.
  3. pipelining controlling multiplexing is what I wanted to avoid in the first place. My goal here was to allow HTTP/2 to use the concurrency allowed by its design while not allowing HTTP/1.1 to "queue" more than 1 request per connection. I might be mistaken here, but my understanding is that users need to be careful with pipelining > 1 over HTTP/1.1.


Development

Successfully merging this pull request may close these issues.

Default pipelining to maxConcurrentStreams with allowH2
