Conversation
Documentation for this PR has been generated and is available at: https://n0-computer.github.io/noq/pr/589/docs/noq/ Last updated: 2026-04-15T06:53:44Z
Performance Comparison Report
| Scenario | noq | upstream | Delta | CPU (avg/max) |
|---|---|---|---|---|
| large-single | 5711.1 Mbps | 8089.1 Mbps | -29.4% | 98.4% / 150.0% |
| medium-concurrent | 5395.6 Mbps | 8078.4 Mbps | -33.2% | 97.6% / 149.0% |
| medium-single | 3867.1 Mbps | 4836.7 Mbps | -20.0% | 92.8% / 101.0% |
| small-concurrent | 3875.3 Mbps | 5310.2 Mbps | -27.0% | 94.8% / 102.0% |
| small-single | 3597.7 Mbps | 4951.3 Mbps | -27.3% | 98.1% / 152.0% |
Netsim Benchmarks (network simulation)
| Condition | noq | upstream | Delta |
|---|---|---|---|
| ideal | 2874.7 Mbps | 4093.1 Mbps | -29.8% |
| lan | 782.4 Mbps | 810.4 Mbps | -3.5% |
| lossy | 69.8 Mbps | 69.8 Mbps | ~0% |
| wan | 83.8 Mbps | 83.8 Mbps | ~0% |
Summary
noq is 27.7% slower on average
c9216e5f4b057e4c2f30dd7c6ea3689add2e1eef - artifacts
Raw Benchmarks (localhost)
| Scenario | noq | upstream | Delta | CPU (avg/max) |
|---|---|---|---|---|
| large-single | 5433.4 Mbps | 8001.3 Mbps | -32.1% | 97.0% / 149.0% |
| medium-concurrent | 5494.0 Mbps | 7880.1 Mbps | -30.3% | 96.7% / 148.0% |
| medium-single | 3933.9 Mbps | 4749.4 Mbps | -17.2% | 99.4% / 150.0% |
| small-concurrent | 3799.4 Mbps | 5177.4 Mbps | -26.6% | 94.3% / 101.0% |
| small-single | 3600.7 Mbps | 4771.2 Mbps | -24.5% | 93.8% / 102.0% |
Netsim Benchmarks (network simulation)
| Condition | noq | upstream | Delta |
|---|---|---|---|
| ideal | N/A | 4051.3 Mbps | N/A |
| lan | N/A | 810.4 Mbps | N/A |
| lossy | N/A | 55.9 Mbps | N/A |
| wan | N/A | 83.8 Mbps | N/A |
Summary
noq is 27.2% slower on average
e4578db732f0f801426a187e9956af576327bd93 - artifacts
Raw Benchmarks (localhost)
| Scenario | noq | upstream | Delta | CPU (avg/max) |
|---|---|---|---|---|
| large-single | 5965.0 Mbps | 7681.5 Mbps | -22.3% | 97.6% / 98.9% |
| medium-concurrent | 5489.8 Mbps | 7743.6 Mbps | -29.1% | 96.6% / 98.1% |
| medium-single | 4043.3 Mbps | 4749.4 Mbps | -14.9% | 96.7% / 98.5% |
| small-concurrent | 3857.1 Mbps | 4942.4 Mbps | -22.0% | 97.5% / 99.6% |
| small-single | 3634.3 Mbps | 4824.6 Mbps | -24.7% | 97.1% / 99.1% |
Netsim Benchmarks (network simulation)
| Condition | noq | upstream | Delta |
|---|---|---|---|
| ideal | 3128.8 Mbps | 4022.8 Mbps | -22.2% |
| lan | 796.4 Mbps | 810.3 Mbps | -1.7% |
| lossy | 69.8 Mbps | 69.8 Mbps | ~0% |
| wan | 83.8 Mbps | 83.8 Mbps | ~0% |
Summary
noq is 22.5% slower on average
065e6b66685ebac3f06df57059649b74fc749eac - artifacts
No results available
06da100e6ad66bec40f86a5fc1c63230f207aee4 - artifacts
Raw Benchmarks (localhost)
| Scenario | noq | upstream | Delta | CPU (avg/max) |
|---|---|---|---|---|
| large-single | 5656.8 Mbps | 7992.6 Mbps | -29.2% | 95.0% / 100.0% |
| medium-concurrent | 5466.8 Mbps | 7750.2 Mbps | -29.5% | 93.7% / 100.0% |
| medium-single | 4068.2 Mbps | 4749.7 Mbps | -14.3% | 97.3% / 149.0% |
| small-concurrent | 3871.1 Mbps | 5192.9 Mbps | -25.5% | 100.0% / 152.0% |
| small-single | 3556.2 Mbps | 4858.9 Mbps | -26.8% | 92.5% / 103.0% |
Netsim Benchmarks (network simulation)
| Condition | noq | upstream | Delta |
|---|---|---|---|
| ideal | 3142.6 Mbps | 4058.9 Mbps | -22.6% |
| lan | 782.4 Mbps | 796.4 Mbps | -1.7% |
| lossy | 69.8 Mbps | 55.9 Mbps | +25.0% |
| wan | 83.8 Mbps | 83.8 Mbps | ~0% |
Summary
noq is 24.9% slower on average
f133084bcd67d644179cf01676aadf2af83af1b5 - artifacts
Raw Benchmarks (localhost)
| Scenario | noq | upstream | Delta | CPU (avg/max) |
|---|---|---|---|---|
| large-single | 5445.5 Mbps | 7976.5 Mbps | -31.7% | 94.3% / 99.0% |
| medium-concurrent | 5507.5 Mbps | 7919.8 Mbps | -30.5% | 95.0% / 99.5% |
| medium-single | 4176.5 Mbps | 4749.6 Mbps | -12.1% | 95.4% / 150.0% |
| small-concurrent | 3983.9 Mbps | 5259.0 Mbps | -24.2% | 100.7% / 152.0% |
| small-single | 3542.5 Mbps | 4821.8 Mbps | -26.5% | 92.7% / 101.0% |
Netsim Benchmarks (network simulation)
| Condition | noq | upstream | Delta |
|---|---|---|---|
| ideal | 3106.5 Mbps | 4058.6 Mbps | -23.5% |
| lan | 782.4 Mbps | 810.4 Mbps | -3.5% |
| lossy | 69.9 Mbps | 69.8 Mbps | ~0% |
| wan | 83.8 Mbps | 83.8 Mbps | ~0% |
Summary
noq is 25.3% slower on average
ae61bb525e3a540ec1e23d57dded5905f4472b7a - artifacts
Raw Benchmarks (localhost)
| Scenario | noq | upstream | Delta | CPU (avg/max) |
|---|---|---|---|---|
| large-single | 6018.7 Mbps | 7906.7 Mbps | -23.9% | 97.9% / 149.0% |
| medium-concurrent | 5276.1 Mbps | 7861.4 Mbps | -32.9% | 97.6% / 150.0% |
| medium-single | 4209.5 Mbps | 4529.5 Mbps | -7.1% | 93.1% / 100.0% |
| small-concurrent | 3941.6 Mbps | 5056.2 Mbps | -22.0% | 95.9% / 102.0% |
| small-single | 3633.6 Mbps | 4671.1 Mbps | -22.2% | 97.6% / 152.0% |
Netsim Benchmarks (network simulation)
| Condition | noq | upstream | Delta |
|---|---|---|---|
| ideal | 3114.2 Mbps | 4064.9 Mbps | -23.4% |
| lan | 796.1 Mbps | 810.3 Mbps | -1.8% |
| lossy | 69.9 Mbps | 69.8 Mbps | ~0% |
| wan | 83.8 Mbps | 83.8 Mbps | ~0% |
Summary
noq is 22.6% slower on average
sigh.
e4578db to 065e6b6
#590

## Description

This fixes a bug uncovered in #589.

When we initiate a NAT traversal round with an address we already have a path for, we re-use that path but re-trigger sending an on-path challenge for it. Once we eventually send the PATH_CHALLENGE, we arm a `PathChallengeLost` timer to allow re-sending it. This then loops, usually only until the `PathChallengeLost` timer and challenge are cleared, either by a response to the challenge arriving or by the `PathOpenFailed` timer triggering.

However, the `PathOpenFailed` timer is not armed in the NAT traversal case for revalidation, as we're not opening the path, we're just re-validating it. So unless revalidation works, this path stays broken forever and keeps looping with path challenges until it receives a response.

This fixes revalidation by also arming the `PathOpenFailed` timer when we re-validate. To ensure we correctly handle that timer, this adds an `OpenStatus::Revalidation` case. I also renamed `PathOpenFailed` to `PathTimer::AbandonFromValidation` because it now serves a slightly different purpose.

## Notes & open questions

A test for this exists in #589, but it relies on a fixed proptest setup, which that PR addresses. I wanted to pull the bugfix out into its own PR though.

## Change checklist

- [x] Self-review.
06da100 to f133084
## Description

This fixes some bugs found via #585.

When a connection doesn't support multipath, or when it receives a malicious or misconstructed PATH_ACK frame that has e.g. a `path_id` of `1` which the current connection may not actually have any state for, this would previously cause us to panic. With this change we exit with a PROTOCOL_VIOLATION connection close instead.

## Notes & open questions

I don't have a regression test for this yet. It is generated in #585 and uses the test setup from #589, both of which aren't merged yet. I'll make sure to re-generate the regression test and add it in #585 once that's ready.

## Change checklist

- [x] Self-review.
- [x] Documentation updates following the [style guide](https://rust-lang.github.io/rfcs/1574-more-api-documentation-conventions.html#appendix-a-full-conventions-text), if relevant.
flub
left a comment
Apologies. I shouldn't have commented on an email I saw, but now I felt obliged to review the whole thing. I didn't realise it was ready for review before.
```rust
enum Extensions {
    None,
    MultipathOnly,
    QntAndMultipath,
}
```
Want to throw in ack-frequency for extra chaos? perhaps not yet...
(I'm assuming this is currently always enabled, just like the datagram extension is probably always enabled, but I don't think proptest ever sends datagrams?)
No datagrams in proptests. But I have a branch locally where we do! (and I don't remember many bad things happening from enabling them) :)
We can throw in extra things in the future, totally up for that.
They should then probably be put next to multipath and QNT extensions, as those are the only ones that depend on each other IIRC.
Previously I had:

```rust
struct Extensions {
    multipath: bool,
    qnt: bool,
}
```

But then proptest would generate `multipath: false, qnt: true` as the "minimal" value, but obviously that's misleading because this would enable multipath!
Anyhow - we can add more extensions later; it's generally a good idea. Enabling/disabling MTUD is also something I'd be really excited to add here, since it has generated a lot of test failures in the past, and seeing at a glance that it's required for a test failure is going to be useful!
Turned the branch I had locally into a draft PR: #593
Description
Now depends on #590 merging first so the added test coverage actually passes.
This replaces three different proptest functions (`random_interaction`, `random_interaction_with_multipath_simple_routing` and `random_interaction_with_multipath_complex_routing`) with a single `random_interaction` function taking a `PairSetup` proptest parameter that effectively generates the different proptest cases these functions generated (and more!).

This also fixes the `Arbitrary` impl for `ArrayRangeSet`, which required filtering out empty `ArrayRangeSet`s after the fact, by instead generating them so they can't be empty in the first place.

Change checklist