
Allow resetting of congestion controller state #1842

Closed

@flub (Contributor) commented May 2, 2024

This is a bit of a push, and I appreciate that it exposes some innards of Quinn which you may not want to see exposed. Nevertheless I'm curious what you think of it. Is this at all a reasonable thing to do? And is there a shape of this API that you could consider including?

This allows resetting the congestion controller state from outside of Quinn. To do so it introduces a low-level handle to the InnerConnection which does not itself keep the connection alive.

This is useful when you have out-of-band information about the underlying network path and know that the path has changed. This can occur when using a custom AsyncUdpSocket, for example. In that case a reset can speed up reaching accurate latency estimates and thus improve throughput.
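
As an illustration of the kind of API described above, here is a minimal sketch; the names used (WeakConnectionHandle, ConnectionInner, reset_congestion_state) are assumptions based on the description, not necessarily what the PR's diff actually contains:

```rust
// Hypothetical sketch only; names are illustrative, not the actual PR code.
use std::sync::{Mutex, Weak};

// Stand-in for the connection internals (congestion controller, RTT estimator, ...).
struct ConnectionInner;

impl ConnectionInner {
    fn reset_congestion_state(&mut self) {
        // Reset the congestion controller and RTT estimator to initial values.
    }
}

/// A low-level handle that does not by itself keep the connection alive.
pub struct WeakConnectionHandle {
    inner: Weak<Mutex<ConnectionInner>>,
}

impl WeakConnectionHandle {
    /// Returns `false` if the connection has already been dropped.
    pub fn reset_congestion_state(&self) -> bool {
        match self.inner.upgrade() {
            Some(inner) => {
                inner.lock().unwrap().reset_congestion_state();
                true
            }
            None => false,
        }
    }
}
```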

@djc (Collaborator) commented May 2, 2024

This API looks pretty cumbersome. Maybe this should be some new method on the AsyncUdpSocket trait, or a bit of state returned from one of its methods?

(Or, as mentioned in another PR, maybe iroh should skip the quinn interface directly.)
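
For illustration, one possible shape of that AsyncUdpSocket-based suggestion, as a sketch only; neither the trait nor the method below exist in quinn:

```rust
/// Hypothetical sketch of the suggestion above: let the socket implementation
/// signal path changes. This trait and method are not part of quinn's API.
pub trait PathChangeAware {
    /// Returns `true` if the implementation knows the underlying network path
    /// has changed since the last call; the endpoint could then reset
    /// congestion/RTT state for the affected connection(s).
    fn path_changed(&mut self) -> bool {
        false
    }
}
```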

@Ralith (Collaborator) commented May 2, 2024

I think the underlying motivation here makes sense. There are a number of reasonable scenarios in which a network path might observably undergo drastic change, and in which the application might be uniquely positioned to detect this. Very similar logic is specced around migration:

On confirming a peer's ownership of its new address, a [server] endpoint MUST immediately reset the congestion controller and round-trip time estimator for the new path to initial values

// Reset rtt/congestion state for new path unless it looks like a NAT rebinding.

We don't currently do the same thing when actively migrating as a client, but it seems obvious that we should. Once we do, I think this PR's use case is addressed by the application using the Quinn API to migrate to a new address, i.e. calling Endpoint::rebind. As of #1800 we already pipe quinn::Endpoint::rebind calls down to quinn_proto::Connection::local_address_changed, and I suspect it makes sense to expand the work done there to include state resets similar to those performed in Connection::migrate, probably omitting path validation. We should consider including a flag to indicate whether the path is believed to have changed or not, since e.g. NAT rebinding shouldn't prompt state resets, though it's not obvious whether those cases will ever merit explicit handling at the application layer.
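
A minimal sketch of the kind of reset being described, assuming illustrative names (PathState, reset_for_new_path) rather than the actual quinn-proto internals; the initial values follow RFC 9002's recommendations:

```rust
use std::time::Duration;

// Hypothetical stand-in for per-path state; not actual quinn-proto code.
struct PathState {
    rtt_estimate: Duration,
    congestion_window: u64,
}

impl PathState {
    /// Return path state to initial values, as on migration to a new path.
    fn reset_for_new_path(&mut self, max_datagram_size: u64) {
        self.rtt_estimate = Duration::from_millis(333); // RFC 9002 initial RTT
        self.congestion_window = 10 * max_datagram_size; // ~10 datagrams, simplified from RFC 9002
    }
}

/// Sketch of how `local_address_changed` could take a flag so that a plain
/// NAT rebinding does not reset state.
fn local_address_changed(path: &mut PathState, path_believed_changed: bool) {
    if path_believed_changed {
        path.reset_for_new_path(1200);
    }
}
```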

@djc (Collaborator) commented May 2, 2024

Right, that makes sense.

@flub (Contributor, Author) commented May 10, 2024

While I agree that the client should also try to reset its congestion controller state on migration, I don't think the Endpoint::rebind API would work for our situation. It affects all connections, which is a very big hammer as we only want to affect a single connection.

It's fine if you'd like to close this; I did not expect inclusion as-is. I was mostly curious what your thoughts were on doing this kind of thing. As @djc suggested, we're looking at moving away from the magic socket approach and instead trying to do all the work inside Quinn itself. For this we'd have to make Quinn aware of multiple paths per connection, and then each path would by itself have all the state in PathData, which I think is a much more sustainable approach.

@Ralith (Collaborator) commented May 10, 2024

It affects all connections, which is a very big hammer as we only want to affect a single connection.

You could hack around this by using one endpoint per outgoing connection, though that might have other drawbacks.

More broadly, I'm open to the idea of exposing connection-granularity active migration. Quinn already has a notion of a connection-specific local IP, used with incoming connections to ensure that an endpoint bound to a wildcard address can respond correctly when the host has multiple IPs on the same subnet (common on the IPv6 internet). We could reasonably expand that to outgoing connections, exposing something like Connection::set_local_address(IpAddr). This should be a fairly small and straightforward change.
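
If something along those lines were added, per-connection active migration could look roughly like this from the application side; set_local_address is only the proposal floated above, not an existing quinn API, so the trait below is a stand-in:

```rust
use std::net::IpAddr;

/// Hypothetical trait standing in for the proposed per-connection API;
/// quinn::Connection has no `set_local_address` today.
trait PerConnectionMigration {
    /// Switch this one connection to a new local IP; Quinn would then reset
    /// RTT/congestion state for the resulting new path.
    fn set_local_address(&self, local_ip: IpAddr);
}

/// How an application with out-of-band knowledge of a path change might use it.
fn on_path_change<C: PerConnectionMigration>(conn: &C, new_local_ip: IpAddr) {
    conn.set_local_address(new_local_ip);
}
```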

For this we'd have to make Quinn aware of multiple paths per connection, and then each path would by itself have all the state in PathData, which I think is a much more sustainable approach.

This would be really cool as well, and compatible with the above.

@Ralith (Collaborator) commented May 14, 2024

Closing since this specific direction doesn't seem ideal, but feel free to open an issue to discuss pursuing the other options we covered!

@Ralith closed this May 14, 2024