feat(chunked): use chunked upload limited to 32M buffer #1869

Draft · wants to merge 1 commit into base: main
23 changes: 18 additions & 5 deletions docker/docker_image_dest.go
@@ -178,14 +178,26 @@ func (d *dockerImageDestination) PutBlobWithOptions(ctx context.Context, stream
stream = io.TeeReader(stream, sizeCounter)

uploadLocation, err = func() (*url.URL, error) { // A scope for defer
maxChunkSize := int64(32 * 1024 * 1024) // FIXME: if not specified at cmdline, use the whole length
@haampie (Jul 26, 2023):

If you really want to implement this, you should definitely make it opt-in. As I explained in the thread, some registries (such as ghcr.io) have rather small upload size limits that apply only to chunked uploads; monolithic uploads are not subject to those limits.

uploadReader := uploadreader.NewUploadReader(stream)
// This error text should never be user-visible, we terminate only after makeRequestToResolvedURL
// returns, so there isn’t a way for the error text to be provided to any of our callers.
defer uploadReader.Terminate(errors.New("Reading data from an already terminated upload"))
res, err = d.c.makeRequestToResolvedURL(ctx, http.MethodPatch, uploadLocation, map[string][]string{"Content-Type": {"application/octet-stream"}}, uploadReader, inputInfo.Size, v2Auth, nil)
if err != nil {
logrus.Debugf("Error uploading layer chunked %v", err)
return nil, err
for streamedLen := int64(0); streamedLen < inputInfo.Size; {
Collaborator:

inputInfo.Size can be -1 meaning “unknown” (and this is actually the most common case, for podman push). So this wouldn’t work.

var chunkLen int64
if inputInfo.Size-streamedLen < maxChunkSize {
chunkLen = inputInfo.Size - streamedLen
} else {
chunkLen = maxChunkSize
}
rangeHdr := fmt.Sprintf("%d-%d", streamedLen, streamedLen+chunkLen-1)
res, err = d.c.makeRequestToResolvedURL(ctx, http.MethodPatch, uploadLocation, map[string][]string{"Content-Type": {"application/octet-stream"}, "Content-Range": {rangeHdr}}, io.LimitReader(uploadReader, chunkLen), inputInfo.Size, v2Auth, nil)
if err != nil {
logrus.Debugf("Error uploading layer chunked %v", err)
return nil, err
}

streamedLen += chunkLen
}
defer res.Body.Close()
if !successStatus(res.StatusCode) {
@@ -207,7 +219,8 @@ func (d *dockerImageDestination) PutBlobWithOptions(ctx context.Context, stream
locationQuery := uploadLocation.Query()
locationQuery.Set("digest", blobDigest.String())
uploadLocation.RawQuery = locationQuery.Encode()
res, err = d.c.makeRequestToResolvedURL(ctx, http.MethodPut, uploadLocation, map[string][]string{"Content-Type": {"application/octet-stream"}}, nil, -1, v2Auth, nil)
rangeHdr := fmt.Sprintf("%d-%d", inputInfo.Size, inputInfo.Size)
Collaborator:

This is not really a correct Content-Range value for this operation, ranges are inclusive.

res, err = d.c.makeRequestToResolvedURL(ctx, http.MethodPut, uploadLocation, map[string][]string{"Content-Type": {"application/octet-stream"}, "Content-Range": {rangeHdr}}, nil, -1, v2Auth, nil)
if err != nil {
return types.BlobInfo{}, err
}