fix(status): measure bandwidth at post-send, not at enqueue #466
Conversation
Per-client bandwidth on the status page was tracking the rate at which upstream data was queued into the client send buffer, not the rate at which the kernel actually delivered it. For slow clients, this stayed pinned at the upstream supply rate until the queue hit HWM and backpressure kicked in — overstating real client throughput.

Move the byte counter from the three upstream-receive sites in `stream_handle_fd_event()` to inside `connection_handle_write()`'s `zerocopy_send()` loop. The counter now advances only when the kernel TCP stack accepts bytes, so the 1s tick in `stream_tick()` reports the real client receive rate even before the queue fills.

Side benefit: `total_bytes_sent_cumulative` no longer over-counts bytes that were enqueued but lost on disconnect.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
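A minimal sketch of the shape this describes, assuming hypothetical types (`conn_t`, `buf_node_t`) and treating `zerocopy_send()` as a wrapper that returns the bytes the kernel accepted; it is not the actual rtp2httpd implementation in src/connection.c:

```c
#include <errno.h>
#include <stddef.h>
#include <sys/types.h>

/* Hypothetical stand-ins for rtp2httpd's connection and queue types; the real
 * loop lives in connection_handle_write() in src/connection.c. */
typedef struct buf_node {
    const char *data;
    size_t len, offset;
    struct buf_node *next;
} buf_node_t;

typedef struct {
    int fd;
    buf_node_t *queue_head;
    size_t total_bytes_sent;      /* the counter this change relocates */
} conn_t;

/* Assumed to wrap the zero-copy send path and return the bytes the kernel
 * accepted, or -1 with errno set. */
ssize_t zerocopy_send(int fd, const char *data, size_t len);

int connection_handle_write(conn_t *c)
{
    while (c->queue_head) {
        buf_node_t *b = c->queue_head;
        ssize_t n = zerocopy_send(c->fd, b->data + b->offset, b->len - b->offset);
        if (n < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return 0;                  /* kernel send buffer full: count nothing, retry on next EPOLLOUT */
            return -1;                     /* hard error */
        }
        c->total_bytes_sent += (size_t)n;  /* post-send accounting: only bytes the kernel accepted */
        b->offset += (size_t)n;
        if (b->offset < b->len)
            return 0;                      /* partial write: keep the buffer queued */
        c->queue_head = b->next;           /* buffer fully flushed */
    }
    return 0;
}
```

The point of the shape: when the socket pushes back with `EAGAIN`, the loop exits without advancing the counter, so queued-but-undelivered bytes never inflate the reported rate.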
Azure Static Web Apps: Your stage site is ready! Visit it here: https://thankful-water-0a297bf00-466.eastasia.1.azurestaticapps.net
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: eaa129b097
Pull request overview
This PR moves client traffic accounting from upstream receive time to the client socket write path so /status reports bytes/bandwidth based on what the server actually manages to send. In the rtp2httpd codebase, that directly affects per-client status metrics and disconnected-client totals.
Changes:
- Remove enqueue-time `total_bytes_sent` updates from RTSP/HTTP-proxy receive paths in `src/stream.c`.
- Add post-send `total_bytes_sent` updates in `connection_handle_write()` after `zerocopy_send()`.
- Aim to make slow-client bandwidth reporting match actual downstream delivery.
Review findings:
- Blocking: multicast/FCC paths still increment `total_bytes_sent` at enqueue time elsewhere, so this new write-path counter double-counts those stream types.
- No automated regression coverage appears to validate `/status` `bytesSent`/`currentBandwidth` behavior.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| src/stream.c | Removes queue-time byte accounting from RTSP and HTTP proxy event handlers. |
| src/connection.c | Adds write-time byte accounting in the shared client send loop. |
The previous commit only removed enqueue-time `total_bytes_sent` increments from stream.c, missing four equivalent sites in `fcc_handle_unicast_media()`, `fcc_handle_mcast_active()` (×2), and `mcast_session_handle_event()`. For multicast/FCC streams, bytes were being counted at both enqueue (these sites) and post-send (the new connection.c site), so the displayed bandwidth was roughly doubled (reported ~17 Mbit/s for an 8 Mbit/s stream).

Drop the four leftover increments so post-send is the sole counter. The local `flushed_bytes` accumulator in `fcc_handle_mcast_active()` is preserved — it still drives the diagnostic "Flushed pending buffer chain" log line.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
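For contrast, a hedged sketch of what one upstream-receive site looks like once its increment is dropped; `struct client`, `enqueue_to_client()`, and `on_upstream_data()` are illustrative names, not the actual stream.c functions:

```c
#include <stddef.h>

/* Hypothetical stand-ins; the real sites are the FCC/multicast handlers in
 * src/stream.c (fcc_handle_unicast_media, fcc_handle_mcast_active,
 * mcast_session_handle_event). */
struct client {
    size_t total_bytes_sent;   /* now advanced only in connection_handle_write() */
};

void enqueue_to_client(struct client *c, const char *data, size_t len);

/* Upstream receive path after this commit: the payload is still queued toward
 * the client, but the byte counter is no longer touched here, so FCC/multicast
 * data is counted exactly once, in the post-send loop. */
void on_upstream_data(struct client *c, const char *data, size_t len)
{
    enqueue_to_client(c, data, len);
    /* removed: c->total_bytes_sent += len;   (enqueue-time increment) */
}
```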
Force-pushed from 45a919f to 7f591b9
Summary
Per-client bandwidth on the status page tracked the rate at which upstream data was queued into the client send buffer, not the rate at which it was actually delivered to the client. For slow clients, the displayed bandwidth stayed pinned at the upstream supply rate until the queue hit HWM and backpressure kicked in — overstating true client throughput.
Fix
Move the byte counter from the three upstream-receive sites in `stream_handle_fd_event()` (src/stream.c) to inside `connection_handle_write()`'s `zerocopy_send()` loop (src/connection.c). The counter now advances only when the kernel TCP stack accepts bytes, so the 1s tick in `stream_tick()` reports the real client receive rate even before the send queue fills.

When the client is slow, `sendmsg()` starts returning `EAGAIN` once the kernel TCP send buffer is full, and `bytes_sent` stays at 0 until the receiver acks more data — making the displayed bandwidth converge to the actual link rate. For fast clients (kernel buffer never fills), behavior is unchanged.

Side benefit: `total_bytes_sent_cumulative` and `client_bytes_cumulative` no longer over-count bytes that were enqueued but lost on disconnect.
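A small sketch of how the 1s tick can turn the post-send counter into the reported bandwidth; the field and function names (`client_stats`, `stream_tick_bandwidth()`) are assumptions for illustration, not the real `stream_tick()` internals:

```c
#include <stdint.h>

/* Hypothetical per-client fields; the real 1s accounting lives in stream_tick(). */
struct client_stats {
    uint64_t total_bytes_sent;      /* advanced only after zerocopy_send() succeeds */
    uint64_t bytes_at_last_tick;
    uint64_t current_bandwidth_bps;
};

/* Called once per second. Because total_bytes_sent grows only when the kernel
 * accepts bytes (an EAGAIN from sendmsg() advances nothing), the per-tick delta
 * tracks what the client actually receives rather than what upstream supplies. */
void stream_tick_bandwidth(struct client_stats *s)
{
    uint64_t delta = s->total_bytes_sent - s->bytes_at_last_tick;
    s->bytes_at_last_tick = s->total_bytes_sent;
    s->current_bandwidth_bps = delta * 8;   /* bytes over a 1s window -> bits/s */
}
```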
Test plan
- Build (`cmake --build build`, no warnings)
- Throttle a local client (`tc qdisc add dev lo root tbf rate 1mbit ...`), connect a normal `curl` to a stream, confirm status-page bandwidth converges to ~1 Mbit/s rather than the upstream rate
- `e2e/run-e2e.sh`