
refactor(proto): Split up computing the required pacer delay from updating the pacer#595

Open
matheus23 wants to merge 2 commits into main from matheus23/split-pacer

Conversation

@matheus23
Member

Description

  • Splits the previous Pacer::delay(&mut self, ..) into a Pacer::delay(&self, ..) and Pacer::update(&mut self, ..).
  • Splits PathData::pacer_delay into update_pacer and pacer_delay.
  • Makes path_congestion_check functionally pure: It does the check, but doesn't modify anything. This means you now need to:
    • Call PathData::update_pacer before you do the path_congestion_check
    • Potentially arm the PathTimer::Pacing yourself, if you were PathBlocked::Pacing
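As a rough illustration of the split described above, here is a minimal token-bucket sketch where `update` takes `&mut self` and `delay` takes `&self`. The field names and the refill math are illustrative stand-ins, not the crate's actual `Pacer`:

```rust
use std::time::{Duration, Instant};

// Hypothetical, simplified sketch: `update` mutates the token bucket,
// `delay` is a pure query that can be asked repeatedly without side effects.
struct Pacer {
    tokens: u64,
    capacity: u64,
    prev: Instant,
}

impl Pacer {
    /// `&mut self`: refill tokens for the time elapsed since `prev`.
    fn update(&mut self, now: Instant, bytes_per_sec: u64) {
        let elapsed = now.saturating_duration_since(self.prev);
        let refill = (elapsed.as_secs_f64() * bytes_per_sec as f64) as u64;
        self.tokens = (self.tokens + refill).min(self.capacity);
        self.prev = now;
    }

    /// `&self`: purely compute how long until `bytes_to_send` would fit.
    fn delay(&self, bytes_to_send: u64, bytes_per_sec: u64) -> Option<Duration> {
        if self.tokens >= bytes_to_send {
            return None; // can send immediately, no delay needed
        }
        let deficit = bytes_to_send - self.tokens;
        Some(Duration::from_secs_f64(deficit as f64 / bytes_per_sec as f64))
    }
}

fn main() {
    let mut pacer = Pacer { tokens: 0, capacity: 3000, prev: Instant::now() };
    pacer.update(Instant::now(), 1_000_000);
    // `delay` can now be queried repeatedly without changing pacer state.
    let d1 = pacer.delay(1200, 1_000_000);
    let d2 = pacer.delay(1200, 1_000_000);
    assert_eq!(d1, d2);
}
```

The design point is that callers decide when state advances (`update`) independently of how often they query (`delay`).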

Notes & open questions

This helps with #573: We can update the pacer once, then ask whether the pacer would allow a "normal" packet, but if we don't actually end up sending anything, we fall through and later ask whether a full MTUD probe would be OK.
The answer to these two questions can be different.
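The "update once, ask twice" flow can be sketched like this (hypothetical stand-in bodies using the `update_pacer`/`pacer_delay` names from the description; the numbers are illustrative):

```rust
// Hypothetical sketch: after a single pacer update, a small "normal" packet
// may be allowed while a full MTUD probe against the same state is not.
struct Path {
    tokens: u64,
}

impl Path {
    fn update_pacer(&mut self) {
        self.tokens += 1200; // stand-in refill
    }

    /// Pure query: `None` means "send now", `Some(deficit)` means "blocked".
    fn pacer_delay(&self, bytes: u64) -> Option<u64> {
        (self.tokens < bytes).then(|| bytes - self.tokens)
    }
}

fn main() {
    let mut path = Path { tokens: 0 };
    path.update_pacer();
    // A "normal" packet is allowed...
    assert!(path.pacer_delay(1200).is_none());
    // ...while a full MTUD probe against the *same* state is blocked.
    assert!(path.pacer_delay(9000).is_some());
}
```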

In the future I'm sure we can improve beyond that by doing the pacer update further up in poll_transmit_on_path instead of poll_transmit_path_space.

Change checklist

  • Self-review.
  • Documentation updates following the style guide, if relevant.

Also split up `PathData::pacing_delay` into `update_pacer` and `pacing_delay`

Make CI happy
@matheus23 matheus23 requested a review from flub April 16, 2026 08:10
@matheus23 matheus23 self-assigned this Apr 16, 2026
@github-actions

Documentation for this PR has been generated and is available at: https://n0-computer.github.io/noq/pr/595/docs/noq/

Last updated: 2026-04-16T08:13:13Z

@github-actions

Performance Comparison Report

fd322b8a0a844edbbbd776aef9b403cb41195057 - artifacts

Raw Benchmarks (localhost)

| Scenario | noq | upstream | Delta | CPU (avg/max) |
| --- | --- | --- | --- | --- |
| large-single | 5420.2 Mbps | 8020.4 Mbps | -32.4% | 96.2% / 148.0% |
| medium-concurrent | 5366.2 Mbps | 7920.0 Mbps | -32.2% | 95.4% / 101.0% |
| medium-single | 4175.0 Mbps | 4687.6 Mbps | -10.9% | 98.6% / 149.0% |
| small-concurrent | 3901.1 Mbps | 5376.2 Mbps | -27.4% | 94.3% / 102.0% |
| small-single | 3564.2 Mbps | 4827.7 Mbps | -26.2% | 94.2% / 102.0% |

Netsim Benchmarks (network simulation)

| Condition | noq | upstream | Delta |
| --- | --- | --- | --- |
| ideal | N/A | 4008.9 Mbps | N/A |
| lan | N/A | 810.4 Mbps | N/A |
| lossy | N/A | 55.9 Mbps | N/A |
| wan | N/A | 83.8 Mbps | N/A |

Summary

noq is 27.3% slower on average

Comment on lines 50 to 126
@@ -69,25 +97,18 @@ impl Pacer {
self.last_mtu = mtu;
}

// if we can already send a packet, there is no need for delay
if self.tokens >= bytes_to_send {
return None;
}

// we disable pacing for extremely large windows
if window > u64::from(u32::MAX) {
return None;
}

let window = window as u32;
let Ok(window) = u32::try_from(window) else {
return self;
};

let time_elapsed = now.checked_duration_since(self.prev).unwrap_or_else(|| {
warn!("received a timestamp earlier than a previously recorded time, ignoring");
Default::default()
});

if smoothed_rtt.as_nanos() == 0 {
return None;
return self;
}

let elapsed_rtts = time_elapsed.as_secs_f64() / smoothed_rtt.as_secs_f64();
@@ -100,19 +121,7 @@ impl Pacer {
self.prev = now;
}

// if we can already send a packet, there is no need for delay
if self.tokens >= bytes_to_send {
return None;
}

let unscaled_delay = smoothed_rtt
.checked_mul((bytes_to_send.max(self.capacity) - self.tokens) as _)
.unwrap_or(Duration::MAX)
/ window;

// divisions come before multiplications to prevent overflow
// this is the time at which the pacing window becomes empty
Some((unscaled_delay / 5) * 4)
self
}
}
Member Author


Because this splitting up change is somewhat tricky, I've written a proptest that tries its best to verify that the new version is equivalent to the old version: https://github.com/n0-computer/noq/blob/matheus23/pacer-split-up-proptest/noq-proto/src/connection/pacing.rs#L415-L443

They don't always end up in exactly the same state (there's notably one case where the old delay would leave the state unchanged, because it early-exits with knowledge of bytes_to_send, whereas the new update can't know that and potentially updates the amount of tokens available).

However, what I have seems to pass all tests and do OK.
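The equivalence being checked can be sketched on a toy token-bucket model: a combined refill-and-query (old-style mutating `delay`) should agree with a separate `update` followed by a pure `delay`. This is an illustrative model with made-up arithmetic, not the crate's `Pacer` or the linked proptest:

```rust
// Toy model of the equivalence: old combined API vs. new split API.
#[derive(Clone)]
struct Bucket {
    tokens: u64,
    capacity: u64,
}

impl Bucket {
    // Old-style API: refill and answer "am I blocked?" in one mutating call.
    fn delay_mut(&mut self, elapsed: u64, rate: u64, bytes: u64) -> bool {
        self.update(elapsed, rate);
        self.tokens < bytes
    }

    // New-style API: mutation...
    fn update(&mut self, elapsed: u64, rate: u64) {
        self.tokens = (self.tokens + elapsed * rate).min(self.capacity);
    }

    // ...and pure query, separately.
    fn delay(&self, bytes: u64) -> bool {
        self.tokens < bytes
    }
}

fn main() {
    // Exhaustive check over a small parameter grid (a proptest would sample
    // the space randomly instead).
    for tokens in 0..50 {
        for elapsed in 0..10 {
            for bytes in 0..50 {
                let mut old = Bucket { tokens, capacity: 40 };
                let new = {
                    let mut b = old.clone();
                    b.update(elapsed, 3);
                    b
                };
                assert_eq!(old.delay_mut(elapsed, 3, bytes), new.delay(bytes));
            }
        }
    }
    println!("old and new APIs agree on the toy model");
}
```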

Collaborator

@flub flub left a comment


LGTM, though I do have reservations about introducing Timer::set_if_earlier. It almost feels like it should instead be an invariant that the timer has not been set yet - at least in the pacer's case.

Comment on lines +1900 to +1901
/// - When this function returns [`PathBlocked::Pacing`], and you decide to actually send
/// the given packet, call [`TimerTable::set_if_earlier`] with [`PathTimer::Pacing`]
Collaborator


and you decide to actually send the given packet

That reads weird. Could you rephrase it? Maybe "to wait for the pacing delay, ..."?

}

/// Sets the timer, but only if the timer was unset or was set to a later value before.
pub(super) fn set_if_earlier(
Collaborator


I'm curious, when did you run into situations that a pacing timer was already set when you are trying to set a new one? It seems kind of weird to have this function.
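For context, the doc-commented semantics ("only if the timer was unset or was set to a later value") could be sketched like this. This is a hypothetical simplified timer table, not the crate's actual TimerTable, and `u64` stands in for its timestamp type:

```rust
use std::collections::HashMap;

#[derive(Hash, PartialEq, Eq)]
enum PathTimer {
    Pacing,
}

struct TimerTable {
    deadlines: HashMap<PathTimer, u64>,
}

impl TimerTable {
    /// Arm the timer only if it is unset or currently set to a later deadline,
    /// so an earlier deadline always wins and a later one is ignored.
    fn set_if_earlier(&mut self, timer: PathTimer, deadline: u64) {
        let entry = self.deadlines.entry(timer).or_insert(deadline);
        if deadline < *entry {
            *entry = deadline;
        }
    }
}

fn main() {
    let mut timers = TimerTable { deadlines: HashMap::new() };
    timers.set_if_earlier(PathTimer::Pacing, 100);
    timers.set_if_earlier(PathTimer::Pacing, 200); // later: ignored
    timers.set_if_earlier(PathTimer::Pacing, 50);  // earlier: wins
    assert_eq!(timers.deadlines[&PathTimer::Pacing], 50);
}
```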
