CI: Add testing based on CentOS containers (COMPOSER-2125) #1585
Conversation
In the GitLab CI, we already build CS9 manifests (on CS9 in AWS): https://gitlab.com/redhat/services/products/image-builder/ci/osbuild/-/jobs/6155161065. I think we would be better served by a job that updates the content in the manifests more frequently than by running the Fedora-based unit tests.
What I am doing here precisely is:
From time to time we have a situation where CentOS testing fails due to upstream changes, and we detect this with a two-week delay. I know it will delay PR review, as running the CentOS tests takes more than 2h; I think we should mark them as optional from the review point of view, or run them nightly. Still, we would be able to detect a "potential" CentOS/RHEL issue much faster than two weeks after the PR is merged.
Right, that makes sense. Thanks for the explanation!
I'm fine to merge now but maybe we can think about doing these nightly.
The code and approach are both fine - thanks @elkoniu for figuring this out! ❤️
At the same time, >2h sounds like a hefty price to pay on every PR.
Plus, I'm not a strong believer in non-blocking CI, although sometimes it makes sense and maybe this is one of those times.
Combining the two points above, I wonder if a nightly test wouldn't be a good compromise. We would still get the benefit of early notifications when something sneaks into main that breaks downstream, but it wouldn't slow down development too much.
All it would take is changing the on: condition to a cron schedule, and upon failure we can notify ourselves via the #osbuild-release Slack channel.
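Roughly something like this (a sketch only; the cron time and the extra manual trigger are examples, not something decided in this PR):

```yaml
# Sketch: trigger the CentOS test workflow nightly instead of on every PR.
# The 03:00 UTC time is an arbitrary example.
on:
  schedule:
    - cron: "0 3 * * *"   # nightly at 03:00 UTC
  workflow_dispatch:      # optional manual trigger for re-runs and debugging
```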
WDYT?
Happy to wait for @thozza to be back to get to the final decision.
@achilleas-k and @ochosi - thank you both for the review. I will update the code tomorrow with:
I think we should wait for @thozza's final verdict, as I am using some CentOS downstream scripts in a bit "nasty" way instead of copying them into the repo (usually our upstream feeds downstream, not the other way around :) )
Force-pushed from f6da238 to 23eb210.
Thanks for your work on this, Pawel, much appreciated.
Unfortunately, I do not think that this approach is correct, and IMO it would create problems in the future. I added a longer comment inline, but in short, the tests should not depend on any downstream code from CentOS Stream. The whole idea is to run the unit tests in pre-built CS containers (in addition to the Fedora container that is used now).
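(For orientation, the shape of that idea in GitHub Actions terms could look roughly like the sketch below. The job name, container image tag, and install/test commands are illustrative assumptions, not this PR's actual code.)

```yaml
# Illustrative sketch only: run the existing unit tests inside a pre-built
# CentOS Stream 9 container. Image tag and commands are assumptions.
jobs:
  unit-tests-cs9:
    runs-on: ubuntu-latest
    container: quay.io/centos/centos:stream9
    steps:
      - uses: actions/checkout@v4
      - name: Install test dependencies
        run: dnf install -y python3-pip && python3 -m pip install pytest
      - name: Run the unit tests
        run: python3 -m pytest
```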
Force-pushed from 9894b76 to 9b5653c.
The CentOS-based tests unwrap to different pytest settings.
CentOS 9 settings:
I have encapsulated these final pytest parameters in the new CI jobs and removed the dependency on downstream code.
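As a purely illustrative example of what such an encapsulated step could look like inside a job like the one sketched earlier (the marker expression and the deselected test path are invented, not the real CentOS 9 settings):

```yaml
# Invented example: the CentOS 9 job carries its own pytest parameters
# inline, rather than sourcing them from downstream scripts.
      - name: Run unit tests with CentOS 9 pytest settings
        run: |
          python3 -m pytest \
            -m "not fedora_only" \
            --deselect test/mod/test_example.py
```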
Force-pushed from bc30e3a to 03ac198.
Force-pushed from cda964e to e64dc4a.
Thanks for the changes. I like this version!
I've added a few comments in-line.
Could you please try testing the action as a PR in your own repo against main? From what I can tell, the action was not executed as part of this PR, so I can't tell if it works.
@thozza thanks for the review \o/
As @ochosi suggested, I switched from calling this action on every PR to a nightly schedule. I will add a run result from my fork in a comment.
Force-pushed from e64dc4a to c341870.
Thanks a lot for the changes. I like how it all turned out and how simple the action is. 🥳
Happy to approve once we resolve the remaining discussions and once you can provide an example run of the action.
Force-pushed from c341870 to f56ca7c.
This is fine by me!
Not sure about osbuild testing... We touched upon this part of the CI a bit in the osbuild CI hackathon, but I'm not sure if this would be helpful here, too.
Force-pushed from f56ca7c to ca19962.
Force-pushed from ca19962 to 91320e1.
Force-pushed from 91320e1 to 4d1485b.
Clicked the rebase button.
FTR, this PR is good to go from my PoV; I'm only waiting on proof of a successful run of the pipeline (on a fork).
This PR is stale because it has been open 30 days with no activity. Remove "Stale" label or comment or this will be closed in 7 days.
@elkoniu what is the status of this PR?
Force-pushed from 4d1485b to e4b588e.
In the release loop, upstream changes are merged to CentOS every two weeks. This creates a delay in error detection when new tests are added upstream. Running tests in CentOS-based containers on top of the upstream code more frequently should speed up error detection.
Force-pushed from e4b588e to 1e8703a.
In the release loop, upstream changes are merged to CentOS every two weeks. This creates a delay in error detection when new tests are added upstream.
Running tests in CentOS-based containers for every pull request should solve the issue.
https://issues.redhat.com/browse/COMPOSER-2125