
[release-5.15] Define SKOPEO_CIDEV_CONTAINER_FQIN #1344

Closed

Conversation

@mtrmac commented Aug 12, 2021

otherwise Skopeo tests fail with

ERROR: Environment variable 'SKOPEO_CIDEV_CONTAINER_FQIN' is required by contrib/cirrus/runner.sh:_run_setup() but empty or entirely white-space. (/usr/share/automation/lib/console_output.sh:97 in req_env_vars())

A backport of #1334.
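For context, a minimal sketch of the kind of check `req_env_vars()` performs; the real implementation lives in the containers/automation library at `/usr/share/automation/lib/console_output.sh`, so the body below is an illustrative assumption, not that code:

```shell
#!/usr/bin/env bash

# Hypothetical re-implementation of req_env_vars() for illustration only;
# the real version is in /usr/share/automation/lib/console_output.sh.
req_env_vars() {
    local var val
    for var in "$@"; do
        val="${!var}"                    # bash indirect expansion: value of the named variable
        if [[ -z "${val// /}" ]]; then   # treat empty or all-spaces values as missing
            echo "ERROR: Environment variable '$var' is required but empty or entirely white-space." >&2
            return 1
        fi
    done
}
```

This is why defining `SKOPEO_CIDEV_CONTAINER_FQIN` (even before any test uses it) is enough to get past the `_run_setup()` gate.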

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
mtrmac commented Aug 12, 2021

@cevich PTAL; this is targeted at the release-5.15 branch of c/image. Does that need branch-specific adjustments?

Cc: @TomSweeneyRedHat

@mtrmac changed the title from "Define SKOPEO_CIDEV_CONTAINER_FQIN" to "[release-5.15] Define SKOPEO_CIDEV_CONTAINER_FQIN" Aug 12, 2021
@mtrmac mtrmac closed this Aug 12, 2021
@mtrmac mtrmac deleted the backport-fqin branch August 12, 2021 21:02

cevich commented Aug 16, 2021

> Does that need branch-specific adjustments?

@mtrmac did you mean to close this?

Yes, assuming this is a new release (I'm not sure how old this branch is WRT main). The workflow for other projects is:

  • $DEST_BRANCH (and SKOPEO_CI_TAG) both need updating.
  • A new entry is added to the cirrus-cron table (to keep the VM images alive for the long term).
  • Future updates of IMAGE_SUFFIX should not happen on the branch without a really (REALLY) good reason.
    For release branches the goal is stability, so get it working at initial branching, then leave it alone.
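For illustration, the two values the first bullet refers to might look like this in `.cirrus.yml` on a freshly cut branch (the layout and the tag value are assumptions modeled on other containers-org repos, not taken from this one):

```yaml
# .cirrus.yml (sketch; actual values depend on the release being branched)
env:
    DEST_BRANCH: "release-5.15"   # hypothetical: branch that PRs on this branch target
    SKOPEO_CI_TAG: "v1.4.0"       # hypothetical: Skopeo branch/tag pinned for test-skopeo
```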

That said, we don't have a cirrus-cron setup on this repo (should we?). This has consequences: if there is no activity in 30 days, the VM images will be deleted automatically, which will break CI for any future backport PRs. The alternative (on release branches) is to disable CI, which is assumed to be better than a guaranteed failure.

Let me know what you'd like. Setting up cirrus-cron + fail-mail notifications isn't too much work for me, maybe a half-day or so. I just need to know which release branches need to be included.


mtrmac commented Aug 16, 2021

> > Does that need branch-specific adjustments?
>
> @mtrmac did you mean to close this?

I’m sorry for not writing it down, this happened as part of #1343.

> Yes, assuming this is a new release (I'm not sure how old this branch is WRT main).

The 5.15 branch is pretty new, but will obviously get progressively more stale.

> The workflow for other projects is:
>
>   • $DEST_BRANCH (and SKOPEO_CI_TAG) both need updating.
>   • A new entry is added to the cirrus-cron table (to keep the VM images alive for the long term).
>   • Future updates of IMAGE_SUFFIX should not happen on the branch without a really (REALLY) good reason.
>     For release branches the goal is stability, so get it working at initial branching, then leave it alone.

TO DO: Are we using the frozen Skopeo branch?

> That said, we don't have a cirrus-cron setup on this repo (should we?). This has consequences: if there is no activity in 30 days, the VM images will get deleted automatically.

AFAICS .cirrus.yml does not point at any c/image-specific images; a VM is used only for test-skopeo. So it seems reasonable to me to let Skopeo drive this and just keep c/image in sync. If there is a cirrus-cron for Skopeo (is there?), would that work?

> Let me know what you'd like. Setting up cirrus-cron + fail-mail notifications isn't too much work for me, maybe a half-day or so. I just need to know which release branches need to be included.


mtrmac commented Aug 16, 2021

@cevich So maybe I'm paying more attention now: should this PR have been something like #1346 instead, making c/image CI on this branch rely on the corresponding Skopeo branch?


cevich commented Aug 17, 2021

> I’m sorry for not writing it down, this happened as part of #1343.

Oh good, okay, thanks for the reference.

> So maybe I'm paying more attention now

Yeah, sorry, it's a little brain-hurty, so I'll go slow 😀 My goal/intention is to make things as maintenance- and developer-friendly as possible, including limiting surprise "CI can't work"-type failures. The main thing I'm concerned about is that there tend not to be many PRs on old branches, so if a backport is needed, and CI is needed, it would be nice if it didn't suddenly break because the VM images aged out due to inactivity.

Yes, we already have cirrus-cron set up on Skopeo, but I'm not sure how well its release branches correspond to the branches here. If they're always (guaranteed) kept in sync, then there's no problem: the Skopeo cirrus-cron setup can cover for the VMs here. The key is the $IMAGE_SUFFIX value and the "VM img. keepalive" (meta_task) task in .cirrus.yml. We want to run that task (somewhere) with the same $IMAGE_SUFFIX representing the VMs to preserve, at least once every 30 days.
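The "VM img. keepalive" (meta_task) task mentioned above follows a common pattern in the containers-org repos; here is a rough sketch of what such a task can look like in `.cirrus.yml`. The task name, container image, and env var names are assumptions modeled on that pattern, not copied from this repo:

```yaml
# Sketch of a keepalive (meta) task: a small container refreshes the listed
# VM images' timestamps so they are not garbage-collected after 30 days of
# inactivity.  IMAGE_SUFFIX identifies the VM image set to preserve.
meta_task:
    container:
        image: "quay.io/libpod/imgts:latest"   # assumed image-timestamp tool
    env:
        IMGNAMES: "fedora-${IMAGE_SUFFIX} prior-fedora-${IMAGE_SUFFIX}"
        BUILDID: "${CIRRUS_BUILD_ID}"
        REPOREF: "${CIRRUS_REPO_NAME}"
    script: "/usr/local/bin/entrypoint.sh"
```

Running this daily via cirrus-cron on each release branch (with that branch's $IMAGE_SUFFIX) is what keeps backport-PR CI working long-term.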

There is NO practical way to rebuild old VM images once they're lost; as with rm, it's a one-way street. In the other containers repos we run daily jobs, except on branches where we don't care about future CI working. There we patch .cirrus.yml (or delete it) to reflect this, thereby limiting developer surprises in PRs. That's really what it's all about.

So give it a think, and let me know if/how I can help.
