# 🚀 Feature: Report file name where a test fails #3200
The error stack is not guaranteed to contain the filename of the failing test. If the test times out, or if it fails due to an asynchronously thrown exception, I often have to go hunting. In my application this ends up being about half the time.

An example:

---
Fair enough, this could be added in the epilogue.

---
I would like to fix this bug but have an issue with the suggestion from @Bamieh: the epilogue only has access to the flat stats object.

```json
{
  "suites": 1,
  "tests": 1,
  "passes": 0,
  "pending": 0,
  "failures": 1,
  "start": "2018-01-15T20:29:17.733Z",
  "end": "2018-01-15T20:30:24.430Z",
  "duration": 66697
}
```

`runnable.js` has two points where this error might be output (line 239 and line 299). I could just add the filename to the end of the string at that point.
The result looks like:
How would adding it to the epilogue stats be helpful? I'm trying to understand, as I don't know the internals of the library that well.

---
We need to know exactly where this information is supposed to be printed by a reporter, and which reporters do or don't print it. Let's put the brakes on rushing to change things before we agree on what we expect Mocha to do.

---
For timeouts: can this problem not be solved by making your suite and test names more unique? The spec reporter prints the timeout message directly after the suite and test name. Note that we can't generate a stack trace, because the test doesn't actually fail on its own (among other reasons, like #3119).

For async exceptions: when an async exception is thrown, you should get a stack trace. Do you have an example of where this is not happening?

---
@boneskull Here is a minimal example.

```js
// lib.js
function f (shouldThrow, callback) {
  if (shouldThrow) {
    return setTimeout(() => { throw Error() })
  }
  return setTimeout(() => callback())
}
exports.f = f
```

```js
// passing_test.js
const f = require('./lib').f
describe('f', () => {
  describe('with true', () => {
    it('eventually passes', callback => {
      return f(true, callback)
    })
  })
})
```

```js
// throwing_test.js
const f = require('./lib').f
describe('f', () => {
  describe('with false', () => {
    it('eventually passes', callback => {
      return f(false, callback)
    })
  })
})
```

In my ideal scenario, something in the output would mention the file containing the failing test. Here is the actual output:
As you can see, the origin of the stack trace is the library (which is often useful, just not as immediately useful as the location of the failing test).
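This behavior can be reproduced outside Mocha entirely. In the sketch below (function and message names are illustrative), an error thrown from a timer callback carries no stack frame for the function that scheduled the timer, which is why the stack can never name the test file:

```javascript
// By the time the timer callback throws, the function that scheduled it has
// already returned, so its frame is no longer on the stack.
function scheduleThrow () {
  setTimeout(() => { throw new Error('boom') })
}

process.on('uncaughtException', (err) => {
  // The stack names the timer callback, not scheduleThrow.
  console.log(err.stack.includes('scheduleThrow')) // prints: false
})

scheduleThrow()
```

The stack is rooted in the event-loop tick that ran the callback, so only the callback's own frames (and timer internals) appear in it.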
---

The example is a little confusing due to the naming of the tests.

---
Anyway, I don't really see how it's possible to show the originating test without:

Of the above tools, I don't know to what extent they work in any given environment. It'd either need to work in both modern browsers and supported Node.js versions, or we'd have to abstract it. Any way you slice it, it won't be easy. We'd be able to retain the filename, test, and suite name(s) for errors like this, but I don't see how we can get an actual stack trace, since we can't just create one after the fact.

Mind you: this is a capability I want Mocha to have... more discovery & requirements are necessary, though.

---
@boneskull I would like to add the test case, but I'm not sure how to do that. I see that there isn't one in the tests so far, but what would it look like? I'm still learning...

---
So this issue is about long stack traces rather than just displaying the filename the failing test is in? I think both would be useful, and the latter would be much easier.

---
Personally, I would have far less use for a stack trace than for a filename/line number.

---
@vkarpov15 No, the stack trace is not going to be feasible; it would be one way of getting at the filename & line number. The line number is also not going to be feasible. What is possible is:

This information would need to be retained via one of the several modules mentioned in this comment.

@dfberry This would involve many changes to very touchy code, FWIW. What has to happen is that we need to basically "track" when a test (or code under test) adds an "async" task to the runtime's internal message queue. Examples:

What makes this potentially more problematic (depending on the solution we choose) is:

A couple of ideas, then:
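One of the ideas floated here — capturing a stack trace at the moment an async task is scheduled, then splicing it onto whatever that task later throws — can be sketched by wrapping `setTimeout`. This is a rough illustration, not Mocha's implementation; a real version would also need to cover promises, I/O callbacks, and browser timers:

```javascript
// Sketch: retain the scheduling-time stack so a later async throw can point
// back at the test (or code under test) that queued the task.
const realSetTimeout = global.setTimeout

global.setTimeout = function patchedSetTimeout (fn, ...rest) {
  // Creating an Error here captures the current stack, which still includes
  // the frame of the test file that is scheduling this task.
  const scheduled = new Error('async task scheduled here')
  return realSetTimeout(function (...args) {
    try {
      return fn(...args)
    } catch (err) {
      err.stack += '\n    --- scheduled at ---\n' + scheduled.stack
      throw err
    }
  }, ...rest)
}
```

With this patch in place, the error thrown inside `lib.js` above would also carry the frame of the `it` callback that called `f`, because that frame was still live when `setTimeout` ran. The price is an `Error` allocation for every scheduled task, which is part of why this is expensive to do by default.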
---
That being said, the groundwork for such a thing could open things up for Mocha to be extra helpful in tracking down problems... it's something I would want to investigate further.

---
Relevant: #3223

---
I wrote a quick-and-dirty reporter plugin, mocha-spec-reporter-with-file-names, that seems to accomplish this. Output looks like this:

---
I've been bitten by this before. It's quite annoying to run a large test suite containing dynamically generated test names with misleading call stacks, then have to manually figure out where the tests are coming from. https://www.npmjs.com/package/mocha-spec-reporter-with-file-names looks like a pretty great solution. Nice job @electrovir 😄

Per #5027, we want to be very careful with landing big new features, even if they're in output designed to be human-readable and not consumed by machines. So this would be a

cc @mochajs/maintenance-crew: any objections?

---
Talking with @voxpelli: this is, interestingly, an issue with quite a few default reporters.

This brings up another question: should the file name always be there, or just when the test fails? We're thinking always, for consistency. Note that built-in reporters generally use the
---

Is it possible to have Mocha report the filename containing a failed test? Could I write a reporter that does this, or would I have to build support for it and then write such a reporter? I'm not at all familiar with the Mocha codebase yet, but I'd be willing to dive in and try contributing this if the Mocha community would welcome it.

After a test fails, I frequently find myself grepping for the title of the test to determine the filename so I can then run the test in isolation. Reporting the filename would save me a step.
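The manual step being described looks roughly like this (the directory and test file here are throwaway examples created just for the demonstration):

```shell
# Today's workaround, demonstrated with a throwaway test directory:
mkdir -p test
printf 'it("eventually passes", function () {})\n' > test/throwing_test.js
# Step 1: grep for the failing title to find the file containing it...
grep -rl "eventually passes" test/
# ...step 2: re-run just that file, e.g. `mocha test/throwing_test.js`.
```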