
Present more detailed information from CI results #572

Open
keturn opened this issue May 9, 2020 · 6 comments
@keturn
Member

keturn commented May 9, 2020

The current checks that run on pull requests (also known as validation actions, or continuous integration) produce output that looks like a teletype that's trying to save paper by not including stack traces for failing tests.

The output from these tools could be much easier to read and far more informative, helping authors, peers, and mentors discover why something is not passing a test without needing to get to their own development machine and check out the branch to try to reproduce the results.

I'm not suggesting we reinvent the wheel or use anything cutting-edge; there's been plenty done in this field. We're using JUnit, which set the standards that a lot of other tools came to follow, so I hope that makes it easy to find something compatible.

This GitHub Actions forum thread on Publishing Test Results concludes there aren't good options for presenting this information in the limited interface available to the GitHub Action itself, but there are third-party integrations that are free for open source repositories.

There are a couple of options that turned up when searching for something GitHub-Action-compatible and that might be worth a closer look:

(This would be a more complete way to address #557)

@keturn
Member Author

keturn commented May 9, 2020

It sounds like Cervator would like the opportunity to address the maintainability concerns that have come up with Jenkins, and to offer Jenkins as a way of providing this sort of interface to build results. There's probably an issue for that in another repo, or a card on a Trello board somewhere?

@keturn
Member Author

keturn commented May 10, 2020

Here's a more concrete example of what's missing from the current view, as compared to Jenkins:

(screenshot: side-by-side comparison of the current test-runner output and Jenkins' per-test results view)

(@jdrueckert, I hope this gives you the specifics you were asking for yesterday in Discord)

@skaldarnar
Member

skaldarnar commented May 10, 2020

I'd like to mention the GitHub Checks API in this context: https://developer.github.com/v3/checks/

There are GitHub Apps integrating with it to annotate the code with what went wrong (see https://github.com/marketplace/check-run-reporter), which is maybe an even better option than a nested view in Jenkins?
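For reference, a check run created through that API carries its annotations inside the `output` object. Here is a minimal sketch of such a payload; the file path, line number, and failure message are made-up examples, not anything from our build:

```json
{
  "name": "unit-tests",
  "head_sha": "<commit sha>",
  "status": "completed",
  "conclusion": "failure",
  "output": {
    "title": "Unit test results",
    "summary": "1 of 120 tests failed",
    "annotations": [
      {
        "path": "src/test/java/org/example/FooTest.java",
        "start_line": 42,
        "end_line": 42,
        "annotation_level": "failure",
        "message": "expected: <3> but was: <2>"
      }
    ]
  }
}
```

Each annotation is pinned to a path and line range, which is what makes the changed-files limitation below bite.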

@keturn
Member Author

keturn commented May 10, 2020

Annotations do sound like a great feature, but the thread I linked earlier says they have some significant limitations that mean they can only provide part of the answer. In particular,

> Annotations can only be displayed on changed files. If you have a file that was not changed but the test is failing for that file you cannot display an annotation there.

@skaldarnar
Member

skaldarnar commented May 11, 2020

Looking at JabRef, I see that they set up different jobs for different tests in their pipeline. Maybe that's something we could do as an intermediate step to get a bit more visibility on build and test results: https://github.com/JabRef/jabref/blob/master/.github/workflows/tests.yml

(screenshot: JabRef's separate per-job check results listed on a pull request)
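For illustration, a split along JabRef's lines might look roughly like this in a workflow file. This is only a sketch; the job names and Gradle tasks (`checkstyleMain`, `test`) are assumptions about our build, not a tested configuration:

```yaml
# Hypothetical workflow splitting the single "check" into separate jobs,
# so each gets its own pass/fail entry in the PR checks list.
name: Pull Request Checks
on: [pull_request]

jobs:
  checkstyle:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v1
        with:
          java-version: 8
      - name: Run Checkstyle
        run: ./gradlew checkstyleMain checkstyleTest

  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v1
        with:
          java-version: 8
      - name: Run unit tests
        run: ./gradlew test
```

The trade-off is that each job checks out the repo and sets up its own runtime from scratch.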

@keturn
Member Author

keturn commented May 11, 2020

That'd be an improvement, yeah. If it presented separate jobs for each of the things check is doing now (checkstyle, test, and whatever else), it would make it easier to see which part failed and to find the relevant output.

On the other hand, with the checks running as fast as they do, the overhead for each one being a job that has to set up its own runtime might be pretty big in comparison. 🤔

I don't think that's a real blocker, anything we use that has this sort of dynamic worker allocation is going to be the same way. It'll still be plenty fast, assuming there's no shortage of workers in the pool. It's just a little resource-hungry. 🚚
