
containers.conf: add field for AddCompression to Engine table #1593

Merged: 1 commit into containers:main on Aug 24, 2023

Conversation

@flouthoc (Collaborator) commented Aug 2, 2023

Allows users to set a default value for `AddCompression` in the Engine table, so that users of Podman and Buildah can use containers/buildah#4912 by default.
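For illustration, such a default could be set in the `[engine]` table of containers.conf roughly like this (a minimal sketch; the key spelling follows the underscore form suggested in review below):

```toml
[engine]
# Compression variants to add alongside the original instance on every
# `manifest push`; supported values are "gzip", "zstd", and "zstd:chunked".
add_compression = ["zstd"]
```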

Closes: containers/buildah#4912 (comment)

@flouthoc (Collaborator, Author) commented Aug 2, 2023

See containers/buildah#4912 (comment), where the original request for this field was made.

@containers/podman-maintainers PTAL

@vrothberg (Member) left a comment

Please add an entry to the default containers.conf in pkg/config and a test to make sure that setting the option works. See https://github.com/containers/common/commit/4f1c361908678e80831bb24988f87cffc86f6965 as an example.
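A hedged sketch of what such an entry in the shipped defaults might look like (commented out, in the style the default containers.conf uses to document options without enabling them; the exact wording is an assumption):

```toml
[engine]
# List of compression algorithms. If set, the requested compression variants
# are added to the manifest list, alongside the original instance, on every
# `manifest push`. Supported values are "gzip", "zstd", and "zstd:chunked".
#
# add_compression = []
```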

@@ -424,6 +424,12 @@ The `engine` table contains configuration options used to set up container engin

Name of destination for accessing the Podman service. See SERVICE DESTINATION TABLE below.

**add-compression**=[]
A Member commented on the diff:

Suggested change
- **add-compression**=[]
+ **add_compression**=[]

Comment on lines 429 to 431
List of compression algorithms. If set, it ensures that the requested compression variant for each platform is added to the manifest list, keeping the original instance intact in the same manifest list, on every `manifest push`. Supported values are `gzip`, `zstd`, and `zstd:chunked`.
@Luap99 (Member) left a comment:

As someone who has not been looking at any of your work, this description seems a bit short to me. What is the difference from the compression_format option? What happens when the original instance already contains the correct compression? I assume it's a no-op in that case.

I think this needs more description and some cross references between these two options.

For example, what happens if I have compression_format="gzip" and add_compression=["zstd"]? Would the final manifest include both gzip and zstd?

Also, maybe it would be better to rename this to something like compression_manifest_formats? That would sort the option next to the other compression options, which makes it easier for users to find and follow, IMO.

@flouthoc (Collaborator, Author) replied:

> For example, what happens if I have compression_format="gzip" and add_compression=["zstd"]? Would the final manifest include both gzip and zstd?

@Luap99 Yes, that is correct; now we can create a multi-compression-variant manifest. I will extend the documentation to explain this more.
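A hypothetical containers.conf sketch of the combination being discussed (key names assume the underscore spelling suggested above):

```toml
[engine]
# Instances are compressed with gzip when pushed ...
compression_format = "gzip"
# ... and a zstd variant of each instance is additionally added to the
# manifest list on `manifest push`, so the pushed list carries both.
add_compression = ["zstd"]
```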

A Member commented:

This question makes the CLI too complex. We need to make this simple for the user; my original goal was to allow containers.conf to specify the zstd:chunked and gzip formats, and hide that fact from users.

Then if the user only wanted to deal with gzip, they could specify --compression gzip.

@mtrmac (Collaborator) left a comment

I’m not opposed to adding an option, but I think right now it is premature.

The implementation we have only works for {buildah,podman} manifest push, where the user is explicitly dealing with a multi-instance manifest list. And it seems to me there are two use cases:

  • The image is pre-existing and the user is making a small local edit (IIRC, though I’m not sure, that can even happen without pulling all of the image locally). In that case we should not just silently double the size of that multi-image without prompting.
  • The image is entirely new, built locally or via some other-arch workers. In that case, sure, a containers.conf option to specify the default compression makes sense.

What is currently missing, and I think critical to have, or at least to consider, is to implement creation of a multi-compression image on a basic podman push.

And it seems to me that if we add an option, it should cover both the “podman manifest push of a new manifest” and “basic podman push” cases. So not a manifest push-specific add-compression option.


Also I’m afraid we need to discuss both the UI flow, and the exact semantics (for c/image/copy requireCompressionFormatMatch) of both the CLI and containers.conf options, as a package. We have the --add-compression semantics ~settled (though still open to change) for the podman manifest push; it’s less clear for podman push.


Basically what I’m thinking is that we should have a new-image-compression option that applies to podman push (principally for locally-built images, because users Should Know that podman pull && podman push is not a good way to copy images) and to podman manifest creating new manifests.

(And at the implementation level, treating “pushing new images” differently from generic edits is attractive because we know the source image is uncompressed, so the complexity of dealing with pre-existing mixed-compression images, to a large part, goes away.)

That option, ideally, wouldn’t apply to edits of existing manifest lists; users can do that via manual CLI options if that’s the kind of edit they want to do.


Overall, having the option named “add compression” seems, to me, subtly but non-trivially badly targeted.


Yes, using the same option as compression_format seems attractive. Two downsides:

  • The existing compression_format prefers to use the on-registry version, even if it uses some other compression. That’s typically much faster, so it’s a trade-off. OTOH for creating multi-compression multi-images, we would probably prefer to use exactly the desired algorithm. Fitting that into the existing option seems a stretch. (Would requireCompressionFormatMatch change depending on whether the option contains one or > 1 entry??)
  • Compression levels are algorithm-specific; we can’t combine a list of compression_format values with a single global level option. (I wouldn’t object to deciding that we just don’t care about compression levels, noting that some users seem to care quite a bit.)

So I think we need a new option name either way (but we might want to set it by default, and not set compression_format by default any more).


My first stab would be new_image_compression_formats, as some kind of array of (algorithm, optional level) data. I don’t know whether that would be better as native TOML or as a string with a custom parser (a rough sketch of both shapes follows after the list below).

  • That’s long… maybe compression_formats with the “new image” part only in documentation would be a reasonable trade-off.
  • I’m not actually sure that we can differentiate “new image” for podman manifest workflows. podman manifest push presumably doesn’t know whether the image originated locally.
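A purely hypothetical sketch of the two shapes floated above; neither the new_image_compression_formats key nor the level syntax exists, they only illustrate the proposal:

```toml
[engine]
# Variant A: native TOML, an array of (algorithm, optional level) tables.
# new_image_compression_formats = [
#   { algorithm = "zstd", level = 3 },
#   { algorithm = "gzip" },
# ]
#
# Variant B: plain strings parsed as "algorithm[:level]" by a custom parser.
# new_image_compression_formats = ["zstd:3", "gzip"]
```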

@mtrmac (Collaborator) commented Aug 2, 2023

> Also I’m afraid we need to discuss both the UI flow, and the exact semantics … of both the CLI and containers.conf options, as a package

As Dan points out, which CLI options override which config file defaults will also be … interesting.

@rhatdan (Member) commented Aug 16, 2023

@flouthoc any update on this?

@giuseppe (Member) left a comment

LGTM

@openshift-ci bot (Contributor) commented Aug 24, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: flouthoc, giuseppe

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of the listed files.

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@flouthoc (Collaborator, Author):

Is the failing test even related to this PR?

@vrothberg (Member):

> Is the failing test even related to this PR?

Please run tests locally before pushing to save some $$$ and energy.

`sudo GOPATH=~/go go test -v ./pkg/config` gives me

Summarizing 2 Failures:
  [FAIL] Config ValidateConfig [It] should succeed with default config
  /home/vrothberg/containers/common/pkg/config/config_test.go:48
  [FAIL] Config Local [It] AddCompression
  /home/vrothberg/containers/common/pkg/config/config_local_test.go:492

Allows users to set default value of `AddCompression` to Engine table so
users can use containers/buildah#4912 by
default.

Closes: containers/buildah#4912 (comment)

Signed-off-by: Aditya R <arajan@redhat.com>
@flouthoc (Collaborator, Author):

> Is the failing test even related to this PR?

> Please run tests locally before pushing to save some $$$ and energy.
>
> `sudo GOPATH=~/go go test -v ./pkg/config` gives me
>
> Summarizing 2 Failures:
>   [FAIL] Config ValidateConfig [It] should succeed with default config
>   /home/vrothberg/containers/common/pkg/config/config_test.go:48
>   [FAIL] Config Local [It] AddCompression
>   /home/vrothberg/containers/common/pkg/config/config_local_test.go:492

Ah, my bad; I missed running the added tests locally. Should be good now.

@rhatdan (Member) commented Aug 24, 2023

/lgtm

@openshift-ci openshift-ci bot added the lgtm label Aug 24, 2023
@openshift-merge-robot openshift-merge-robot merged commit 2a54181 into containers:main Aug 24, 2023
7 checks passed