Consider more aggressive GSO batching #1835

Open
Ralith opened this issue Apr 26, 2024 · 0 comments
Labels
enhancement New feature or request

Comments

Ralith (Collaborator) commented Apr 26, 2024

Pacing is applied in Connection::poll_transmit, before assembling a new packet:

    // Check whether the next datagram is blocked by pacing
    let smoothed_rtt = self.path.rtt.get();
    if let Some(delay) = self.path.pacing.delay(
        smoothed_rtt,
        bytes_to_send,
        self.path.current_mtu(),
        self.path.congestion.window(),
        now,
    ) {
        self.timers.set(Timer::Pacing, delay);
        congestion_blocked = true;
        // Loss probes should be subject to pacing, even though
        // they are not congestion controlled.
        break;
    }

Here bytes_to_send accounts for at most two packets: any previous packet in the current GSO batch, and the potential next packet. If the transmit rate is pacing-limited, that means we wake up every time there's capacity for only 1-2 additional packets, which severely limits GSO batch size and significantly increases CPU overhead.
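The shape of the check above can be modeled with a minimal token-bucket sketch (field and method names here are hypothetical, not Quinn's actual pacing API):

```rust
// Simplified token-bucket pacer, for illustration only. Token deduction on
// send is omitted for brevity.
struct Pacer {
    tokens: u64,      // bytes currently allowed to be sent
    capacity: u64,    // maximum burst size, in bytes
    rate: u64,        // refill rate, in bytes per second
    last_refill: u64, // last refill time, in nanoseconds
}

impl Pacer {
    /// Refill for elapsed time, then report whether `bytes_to_send` may go
    /// out now. This mirrors the shape of the `delay` call above: the caller
    /// only ever asks about the current batch plus one more packet.
    fn allowed(&mut self, now_ns: u64, bytes_to_send: u64) -> bool {
        let elapsed = now_ns - self.last_refill;
        self.tokens = (self.tokens + self.rate * elapsed / 1_000_000_000)
            .min(self.capacity);
        self.last_refill = now_ns;
        self.tokens >= bytes_to_send
    }
}
```

Because `bytes_to_send` covers at most the in-progress batch plus one packet, the connection sets a short pacing timer and wakes as soon as that small amount fits, rather than waiting for enough budget to fill a whole GSO batch.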

It's not obvious how often we're pacing-limited. If we're sending at the path's full capacity, then the only time we're not congestion-limited is after receiving an ACK that frees up some congestion window space. Because the pacing token bucket refills slightly faster than one cwnd per RTT, we should expect that ACKs, on average, free up less cwnd space than the pacer has made available in the period since the last ACK, except when the pacer's accumulated budget exceeds the maximum burst size. If ACKs are sufficiently infrequent, then we should expect to observe frequent batches of min(burst size, GSO batch size) packets, followed by a trickle of 1-2 packet batches until the cwnd is refilled or the next ACK is received.
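To make that concrete, here is a back-of-the-envelope calculation under assumed numbers (the window, RTT, pacing gain, and ACK cadence below are illustrative, not Quinn's defaults):

```rust
// Illustrative scenario: a 120 kB congestion window over a 100 ms RTT,
// paced at 1.25x cwnd/RTT (a common pacing gain, assumed here), with
// 1200-byte packets and an ACK every 25 ms freeing a quarter of the window.
const CWND: u64 = 120_000; // bytes
const RTT_MS: u64 = 100;
const PACKET: u64 = 1_200; // bytes

// Pacer refill rate: slightly faster than one cwnd per RTT (factor 5/4).
const RATE_PER_MS: u64 = CWND * 5 / (4 * RTT_MS); // 1_500 bytes/ms

// Budget the pacer makes available between consecutive ACKs...
const ACK_INTERVAL_MS: u64 = 25;
const RELEASED: u64 = RATE_PER_MS * ACK_INTERVAL_MS; // 37_500 bytes
const RELEASED_PKTS: u64 = RELEASED / PACKET; // 31 full packets

// ...versus the cwnd space one ACK frees up:
const FREED: u64 = CWND / 4; // 30_000 bytes
const FREED_PKTS: u64 = FREED / PACKET; // 25 packets
```

In this scenario the pacer outpaces the ACK-freed cwnd (31 packets of pacing budget vs. 25 packets of window per ACK interval), so the sender spends most of each interval dripping out 1-2 packet batches as tokens refill.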

On the other hand, if ACKs are delivered frequently, the congestion window might prevent us from forming large GSO batches regardless of pacing. We should explore how much batching we see in practice, and consider delaying transmits until pacing and congestion control permit a larger GSO batch size.
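One sketch of that direction: compute the pacing delay needed for a full target batch rather than for a single extra packet (function and parameter names are hypothetical, not a proposed Quinn API):

```rust
/// Nanoseconds until the token bucket could cover `target_batch_bytes`,
/// given `tokens` currently available and a refill `rate_bytes_per_sec`.
/// Waking at this later deadline trades latency for a larger GSO batch.
fn delay_until_batch_ns(tokens: u64, rate_bytes_per_sec: u64, target_batch_bytes: u64) -> u64 {
    if tokens >= target_batch_bytes {
        return 0; // enough budget for the whole batch right now
    }
    let deficit = target_batch_bytes - tokens;
    // Round up so we never wake before the bucket has actually refilled.
    (deficit * 1_000_000_000 + rate_bytes_per_sec - 1) / rate_bytes_per_sec
}
```

For example, with an empty bucket refilling at 1.5 MB/s, a 10-packet batch of 1200-byte packets (12 kB) becomes available after 8 ms; the current code would instead wake roughly every 0.8-1.6 ms for 1-2 packets. Whether that added delay is acceptable presumably depends on how much pacing jitter the congestion controller tolerates.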

Ralith added the enhancement label Apr 26, 2024