Test case broken by migration to axum 0.6 #1966
Comments
Did a git bisect which suggests this is the change that broke it: #1552. When I check that commit out, my repro test fails. When I check out the axum commit immediately prior and apply the diff below to my test, the test passes.

diff --git a/src/main.rs b/src/main.rs
index d7825b4..ca7411f 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -76,7 +76,7 @@ mod tests {
     let backend = MockBackend::default();
-    let app = app(Arc::new(backend), CONCURRENCY);
+    let app = app(Arc::new(backend), CONCURRENCY).into_service();
     let barrier = Arc::new(Barrier::new(10));
     // Spawn 10 concurrent tasks, each of which will send a `foo` request
Using
My understanding is that this means we're creating a new instance of each middleware per request. We don't hit this outside tests because we call … Looks like this is documented and there's a fix we can apply (we should really be moving our …)
Bug Report
Version
Working in 0.5.17, broken in 0.6.0
Platform
Amazon Linux 2
Crates
axum
Description
I work on a service built on axum. It uses tower concurrency limiting and load shedding to throttle incoming requests. We have a unit test that validates that the throttling layer is doing its job properly. When I tried migrating our package to axum 0.6, this test broke.
I've constructed a minimal repro of this issue here. This includes an annotated reproduction of our failing test. But basically this is the problem:
We have a piece of state in our tests that holds an atomic counter, shared across requests as an Extension. The counter is incremented at the start of each request and decremented at the end; if it ever exceeds our configured concurrency limit, we panic, interpreting this as the concurrency-limit layer not having done its job.
The test sets a concurrency limit of 2 and spawns 10 threads which concurrently make requests. We verify that 2 of the requests succeed and the remaining 8 are throttled. In axum 0.5 this test works, but in axum 0.6 we instead get this:
You can see the code for more details.
What I think is happening here: when we dispatch the 10 threads, we clone our Router and hand a clone to each thread. It seems that in axum 0.5, cloning routers would result in several instances with shared internal state for concurrency_limit (due to internal Arc-ing?), but in 0.6 this no longer holds: cloning the router leads to entirely separate instances. That's just my best guess though; I don't know how to verify this hypothesis.
This may not be a bug in axum, but it does seem like there was a breaking change. I couldn't find anything related to this in the changelog.