JSON output is not emitted until tests end #1256

Open
hobovsky opened this issue Jan 26, 2024 · 4 comments
@hobovsky

With cargo test -- -Z unstable-options --format json, I seem to receive JSON lines as soon as the corresponding event is emitted. With cargo nextest run --message-format libtest-json --color never --failure-output immediate --success-output immediate, or pretty much any other combination of options I try, JSON is output only after all tests are complete, just before the final summary. Is my observation correct? Is there a way to receive any form of machine-readable output as tests progress?

In my use case I am not interested in the console feedback or human-readable messages at all; I need only some form of machine-readable output, but I need it to be available around the time of the corresponding test event, not at the very end. Is it possible to satisfy this use case with the current state of nextest? I figured one reason for this behavior could be that otherwise it would not be possible to avoid interleaving messages of different formats, but maybe it would be worth having an option to suppress the original human-readable messages and emit JSON lines immediately instead?
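For illustration, a minimal sketch of such a consumer: spawn the test run, read its stdout line by line, and handle each JSON event as soon as it arrives. This assumes the serde_json crate and nightly's -Z unstable-options; the command and names are illustrative, not prescriptive.

```rust
use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // Spawn the test run with JSON output and capture its stdout.
    let mut child = Command::new("cargo")
        .args(["test", "--", "-Z", "unstable-options", "--format", "json"])
        .stdout(Stdio::piped())
        .spawn()?;

    let stdout = child.stdout.take().expect("stdout was piped");

    // Each line is a self-contained JSON event; handle it immediately
    // instead of waiting for the run to finish.
    for line in BufReader::new(stdout).lines() {
        let line = line?;
        if let Ok(event) = serde_json::from_str::<serde_json::Value>(&line) {
            println!("got event: {event}");
        }
    }

    child.wait()?;
    Ok(())
}
```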

@sunshowers
Member

sunshowers commented Jan 27, 2024

Hi @hobovsky -- thanks for the report! It does look like that is the case at the moment. We can definitely do better here. I think this is because libtest produces its outputs test binary by test binary (suite by suite).

Wonder if @Jake-Shadle has thoughts here. Can we interleave outputs from various test binaries? Maybe as an option.

See

```rust
for test_suite in std::mem::take(&mut self.test_suites).into_values() {
    self.finalize(test_suite)?;
}
```

@Jake-Shadle
Contributor

Test suites are output as a single block to match libtest, e.g.:

{"type":"suite","event":"started","test_count":<n>}
{"type":"test","event":"started","name":<name>} ...
{"type":"test","event":"ok","name":<name>,"exec_time":0.009431065} ...
{"type":"suite","event":"ok","passed":<n>,"failed":0,"ignored":0,"measured":0,"filtered_out":0,"exec_time":0.150101485}

But nextest executes tests quite differently, so matching libtest means all tests in a suite need to be finished before the block can be output; otherwise the suite start/end events would have no meaning, since the tests output in between would be essentially random and the libtest format has no way to say which suite a test is a part of.
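To make that missing link concrete, here is a rough sketch of serde types for the lines above (illustrative types of my own, not anything from libtest or nextest; it assumes serde with the derive feature and serde_json). Note that a test event carries only its bare name, with nothing tying it back to a suite or binary:

```rust
use serde::Deserialize;

// Each JSON line is one of these two shapes, distinguished by the "type" field.
#[derive(Debug, Deserialize)]
#[serde(tag = "type", rename_all = "lowercase")]
enum LibtestLine {
    Suite {
        event: String,
        test_count: Option<u64>,
        exec_time: Option<f64>,
    },
    Test {
        event: String,
        // Only a bare test name -- no field identifies the owning suite.
        name: String,
        exec_time: Option<f64>,
    },
}

fn parse_line(line: &str) -> serde_json::Result<LibtestLine> {
    serde_json::from_str(line)
}
```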

That being said, output is not emitted only at the end; it is output as soon as each test suite completes

```rust
// If this is the last test of the suite, emit the test suite summary
// before emitting the entire block
if test_suite.running > 0 {
    return Ok(());
}
if let Some(test_suite) = self.test_suites.remove(suite_info.binary_id.as_str()) {
    self.finalize(test_suite)?;
}
```

but this is heavily dependent on how many suites are executed and when they finish relative to the end of the entire test run.

IMO there is no clean "fix" for this without changing the output format: either allow each test to reference its parent suite by name/id, and/or drop the suite begin/end output in favor of the consumer building their own accumulator.
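For the accumulator option, a very rough sketch of what the consumer side could look like, assuming a hypothetical suite/binary id were attached to each test event (none of these names come from nextest):

```rust
use std::collections::HashMap;

#[derive(Debug, Default)]
struct SuiteTally {
    passed: u32,
    failed: u32,
    ignored: u32,
}

#[derive(Debug, Default)]
struct Accumulator {
    suites: HashMap<String, SuiteTally>,
}

impl Accumulator {
    // `suite` is the hypothetical suite/binary id carried by the event;
    // `event` is the libtest-style outcome ("ok", "failed", "ignored").
    fn record(&mut self, suite: &str, event: &str) {
        let tally = self.suites.entry(suite.to_string()).or_default();
        match event {
            "ok" => tally.passed += 1,
            "failed" => tally.failed += 1,
            "ignored" => tally.ignored += 1,
            _ => {}
        }
    }
}
```

This would let test lines be emitted (and consumed) immediately, with per-suite summaries reconstructed by the consumer rather than by nextest.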

@hobovsky
Author

hobovsky commented Feb 7, 2024

OK, so I did not notice that JSON is output after every test suite, because I focused too much on having test cases reported separately and I had only one test suite.

But regarding the order of reported events, isn't that already solved? When --no-capture is used, tests are run serially, I believe exactly for this reason: so that events emitted to the standard streams do not interleave, right? Or you could always leave it to users and tell them that they need to ensure serial execution (by setting the number of test threads to 1, or by disabling parallelization; I am not sure what nextest's mechanisms for this are) if they want well-defined JSON report output.

@sunshowers
Member

Ah interesting, yeah -- if tests are being run serially we can produce lines ASAP.
