(ava 3.0.0-beta.2) per test timeout settings not being honored #2365
Comments
Is there a reason the timeout behavior was inverted? The new behavior of a low timeout by default is going to cause me much headache.
AVA now defaults to a 10-second timeout. Previously, there was no timeout at all, which would lead to CI runs timing out, consuming billable resources.
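For reference, this suite-wide default can be adjusted in AVA's configuration. A minimal sketch, assuming AVA 3's ava.config.js file and its timeout option (this is the global "no progress" timer the maintainers describe, not a per-test bound):

```javascript
// ava.config.js — sketch of raising the suite-wide timeout in AVA 3.
// The `timeout` option accepts human-readable durations such as '10s' or '2m'.
export default {
  timeout: '2m'
};
```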
I see. Is the plan then to deprecate t.timeout()?
No — it serves its own purpose. Do you think 10 seconds is too low a timeout?
Wanted to confirm before commenting further. Why not just have a single timeout value that is applied via t.timeout()? It achieves all purposes, and I think this is what mocha does.
We have a different view on timeouts. As long as your test suite is making progress, we don't feel like we need to fail individual tests for taking too long.
(I'm closing this issue for housekeeping purposes, but let's keep the conversation going.)
Perhaps I have been misunderstood here: we are in agreement that if a test case is making progress, it should continue on. However, one should be able to extend the default global timeout setting on a per-test basis, which is currently not possible per the original report of this issue. If my casual browsing of api.js is any good, it looks like there are two timer loops: one per test, triggered by t.timeout(), and one global, triggered by the -T value. That global timer loop is the one I have an issue with. My suggestion was to use the same value as provided by t.timeout() to bound each test. Since it bounds each test, the unbounded CI billing cannot be an issue.
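The per-test timer being described can be illustrated outside of AVA. A minimal Node sketch, where withTimeout is a hypothetical helper (not AVA's implementation) that bounds each test individually with Promise.race:

```javascript
// Sketch: a per-test timeout implemented with Promise.race.
// `withTimeout` is a hypothetical helper, not part of AVA's API.
const withTimeout = (promise, ms, label) =>
  Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms)
    ),
  ]);

// Two fake "tests": one finishes within its bound, one does not.
const fastTest = () => new Promise((resolve) => setTimeout(() => resolve('ok'), 10));
const slowTest = () => new Promise((resolve) => setTimeout(() => resolve('ok'), 200));

async function main() {
  console.log(await withTimeout(fastTest(), 100, 'fast test')); // ok
  try {
    await withTimeout(slowTest(), 100, 'slow test');
  } catch (err) {
    console.log(err.message); // slow test timed out after 100ms
  }
}

main();
```

Because each test carries its own bound, the suite as a whole can never hang indefinitely, which is the CI-billing property mentioned above.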
We'd rather users do not obsess about how long a single test takes. Hence the notion of progress.
I see. I actually think the global timer is more complex for a developer to grok.
For the suite I am working on, I have the necessary context in the moment. Six months from now, or worse, someone else 24+ months out, and this becomes a really expensive problem of pre-diagnosis. With the alternative being proposed, ava would flag the one offending test case, time it out, and report it. Now the developer can go straight to the failing test.
In fairness, even with the global timeout, the developer would eventually land at the same destination, but with a lot of needless pre-work. I cannot emphasize enough the confusion that not having the intermediate test state causes, especially if one didn't author the suite, or authored it a long time ago. It's a non-linear function of the complexity of the system under test. Now we can debate the complexity of a single suite, but with my focus on execution time, IMHO I'm far above average in my sensitivity to smaller/faster test cases and therefore embedded complexity. Which also segues nicely to the timeout-value point.
On low test timeouts
All I am asking for is that level of control, not a particular default value. It's like a linter rule.
Happy to live with 10s or whatever for the out-of-the-box value and set it to my needs. One positive side effect I liked with mocha's opinion on a good test duration (IIRC 35 ms yellow, 50 ms red by default) was that it provided me awareness. I came to value it and brought it along with me when migrating to ava. For large complex test suites, I tend to lose patience when the suite exceeds a couple of minutes, and start looking to refactor. That behavior has saved me many hours, probably much more, in waiting to see if the change I just made passes muster suite-wide. It's made for a more modular system, spiffier test runs, and means I appear less insane to most people (maybe now I'm just getting carried away and should stop). Might not be for everybody, but I found it affected my system design & development behavior in very positive ways.
With concurrently executing tests inside a single worker, and also concurrent workers hopefully maxing out your CPU, these numbers are a lot less meaningful. This also makes quick individual test timeouts problematic. The progress timer helps if the worker process becomes unresponsive, or if it takes an inordinate amount of time before tests are even declared (you can defer declaring the first test, after all).
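The progress timer described here can be sketched as a resettable watchdog, rearmed whenever any test completes. This is a simplified illustration of the idea, not AVA's actual implementation; createProgressWatchdog is an invented name:

```javascript
// Sketch: a "no progress" watchdog, rearmed on every completed test.
// Simplified illustration; not AVA's actual implementation.
function createProgressWatchdog(ms, onStall) {
  let timer = setTimeout(onStall, ms);
  return {
    // Call whenever a test finishes; pushes the deadline out again.
    reportProgress() {
      clearTimeout(timer);
      timer = setTimeout(onStall, ms);
    },
    stop() {
      clearTimeout(timer);
    },
  };
}

// As long as tests keep completing within `ms`, onStall never fires,
// no matter how long the whole suite takes.
const watchdog = createProgressWatchdog(100, () => console.log('suite stalled'));
watchdog.reportProgress();
watchdog.stop();
```

This is why the maintainers can avoid failing any individual slow test: the suite only fails when nothing at all completes for the configured window.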
I agree. Apologies if this comes across as argumentative. I was making the broader point that lower is better, and not having the ability to encode the timeout was something I missed with ava. The global timer being added is almost the exact capability needed: more code, but less ability to control and reason about. I do think having a single timer that is propagated to each test and can be set at various scopes (global, per test, possibly per suite via t.before?) is the simplest/best option, one that tends to lend itself to better control and the ability to reason about a system and its behavior under test. Given that mocha use(s/d?) a similar strategy, I do feel the pattern is broader than just my needs. I should point out that I haven't used mocha for years now; I remember missing it when I first moved to ava, and didn't realize till this discussion how much I missed the timer. I do tend to eye-ball the numbers with ava, but that just tells you I suffer from OCD! With that, I'll rest my case. I can live with whatever you choose to do.
ouch - it got lost in my copy/paste editing of the comment above: it's not the exactness of the timer that is of import, but the estimate-ability. I'd tend to set the timeout to roughly 2x to 2.5x what would be needed in a normal run; the amount of IO needed defines the multiple. Once the mental pattern gets set, it's really, really hard not to write most of your "business logic" as pure functions and restrict the IO-related operations to a few test cases. But now I am advocating for a particular development methodology, way outside the scope of this discussion. Really, over-and-out-this-time.
i just ran into this. in most systems, configuration at leaf nodes is more specific than at parent nodes; a test's configuration is a more specific/granular configuration node than the global configuration. in my case, i'm setting t.timeout() per test and it is not being honored. i think @sramam's points are on the money. the way the docs are written makes me think per-test timeouts should just work: https://github.com/avajs/ava/blob/master/docs/07-test-timeouts.md i strongly feel that we should ensure that t.timeout() takes precedence over the global timeout.
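The leaf-over-parent principle this comment appeals to is ordinary configuration merging. A neutral sketch, where resolveTimeout and the option names are illustrative and not AVA's internals:

```javascript
// Sketch: more specific (leaf) settings override less specific (parent) ones.
// `resolveTimeout` and these option names are illustrative, not AVA's internals.
const globalConfig = { timeout: 10_000 }; // suite-wide default (e.g. set via -T)
const testConfig = { timeout: 60_000 };   // set for one test (e.g. via t.timeout())

// Spread the leaf config over the parent so the leaf wins when both are set.
const resolveTimeout = (global, test) => ({ ...global, ...test }).timeout;

console.log(resolveTimeout(globalConfig, {}));         // 10000, falls back to global
console.log(resolveTimeout(globalConfig, testConfig)); // 60000, per-test wins
```

Under this model the global value is only a fallback, which is the precedence the commenter argues for.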
Description
Please see this for a minimal reproduction
Using ava@3.0.0-beta.2, the per test timeout setting is not being honored. It seems to need the CLI option -T to be set to extend the default timeout.
With the caveat that I don't have a very deep understanding of the code, a quick peek at the source seems to indicate that support for per test timeout was not intentionally removed.
Test Source
link
Error Message & Stack Trace
(Failure) Per test timeout
(Success) Global timeout
Config
Copy the relevant section from package.json:
Command-Line Arguments
Please see above
Relevant Links
Minimal Reproduction
Environment
Tell us which operating system you are using, as well as which versions of Node.js, npm, and AVA. Run the following to get it quickly: