
[Object Store Access] Context Deadline exceeded after upgrading to v0.32.4 from v0.31.0 #6785

Open
jnyi opened this issue Oct 9, 2023 · 15 comments

Comments

@jnyi
Contributor

jnyi commented Oct 9, 2023

Thanos, Prometheus and Golang version used:

  • v0.31.0 --> v0.32.4

Object Storage Provider: AWS S3

What happened: We use k8s and an IAM assume role for accessing AWS S3. This worked fine with Thanos v0.31.0, but after upgrading to v0.32.4 the compactor started complaining; it seems like a regression related to permissions.

What you expected to happen: the Thanos compactor can talk to AWS S3 using AssumeRole.

How to reproduce it (as minimally and precisely as possible): a normal S3 configuration without access keys, assuming the role of the k8s worker for S3 access.

Full logs to relevant components:

ts=2023-10-09T18:20:13.796433303Z caller=blocks_cleaner.go:44 level=info name=thanos-compactor msg="started cleaning of blocks marked for deletion"
ts=2023-10-09T18:20:13.797847735Z caller=blocks_cleaner.go:58 level=info name=thanos-compactor msg="cleaning of blocks marked for deletion done"
ts=2023-10-09T18:24:50.467766592Z caller=runutil.go:100 level=error name=thanos-compactor msg="function failed. Retrying in next tick" err="BaseFetcher: iter bucket: Get \"https://<redacted bucket>.s3.dualstack.us-west-2.amazonaws.com/?continuation-token=<redacted>&delimiter=&encoding-type=url&fetch-owner=true&list-type=2&prefix=thanos%2Foregon-dev%2F\": context deadline exceeded"
ts=2023-10-09T18:24:50.46789616Z caller=compact.go:597 level=error name=thanos-compactor msg="retriable error" err="BaseFetcher: iter bucket: Get \"https://<redacted bucket>.s3.dualstack.us-west-2.amazonaws.com/?continuation-token=<redacted>&delimiter=&encoding-type=url&fetch-owner=true&list-type=2&prefix=thanos%2Foregon-dev%2F\": context deadline exceeded"

s3 config:

"config":
  "bucket": "<bucket>"
  "endpoint": "s3.us-west-2.amazonaws.com"
  "insecure": false
  "region": "us-west-2"
  "signature_version2": false
"prefix": "thanos/oregon-dev"
"type": "S3"

IAM assume role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com",
                "AWS": "arn:aws:iam::<aws account id>:role/KubernetesRoles-IAMRoleWorker-<redacted>"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
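
For completeness: the trust policy above only controls who may assume the role; the role also needs an S3 permissions policy roughly along these lines for Thanos to list and manage blocks (the statement below is an illustrative sketch, not our exact policy):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::<bucket>"
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::<bucket>/*"
        }
    ]
}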

Anything else we need to know:

@yeya24
Contributor

yeya24 commented Oct 16, 2023

@jnyi Is this reproducible every time? It looks like a transient network issue that should only happen rarely.

@jnyi
Copy link
Contributor Author

jnyi commented Oct 23, 2023

It is reproducible. We are using an IAM assume role in AWS instead of direct access keys; rolling back to the older version fixed this issue.

@Pacobart

Also notice in the config that the user set the S3 endpoint as s3.us-west-2.amazonaws.com but the logs show s3.dualstack.us-west-2.amazonaws.com. This is another issue: #6804

@yeya24
Copy link
Contributor

yeya24 commented Oct 23, 2023

Umm, @Pacobart did you see Thanos using the dualstack endpoint only recently, after the upgrade?
Looks like dualstack has been used since 2019: minio/minio-go#1055

@Pacobart

Umm, @Pacobart did you see Thanos using the dualstack endpoint only recently, after the upgrade? Looks like dualstack has been used since 2019: minio/minio-go#1055

Great find! It's unfortunate they did that, but at least now I know where it's coming from.

@yeya24
Contributor

yeya24 commented Nov 27, 2023

@jnyi We got similar issues after updating the Thanos version. Iterating the whole bucket is now required due to #6474, so the list-objects request might take a long time and time out.
If you are using S3 with a versioned bucket, you can try cleaning up some old object versions and trying again. That should improve list performance.
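
For example, an S3 lifecycle rule along these lines would expire old noncurrent versions under the Thanos prefix (the rule ID, prefix, and 7-day window are illustrative):

{
    "Rules": [
        {
            "ID": "expire-old-thanos-versions",
            "Status": "Enabled",
            "Filter": { "Prefix": "thanos/oregon-dev/" },
            "NoncurrentVersionExpiration": { "NoncurrentDays": 7 }
        }
    ]
}

It can be applied with something like aws s3api put-bucket-lifecycle-configuration --bucket <bucket> --lifecycle-configuration file://lifecycle.json.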

@fpetkovski
Contributor

Is there a way to skip old versions when iterating objects in S3? Maybe that is something we can do on our side.

@jnyi
Contributor Author

jnyi commented Dec 4, 2023

Speaking of the object storage layout, and taking Grafana Mimir as an example: they break the layout down by tenant, while Thanos today puts all tenants plus the raw-resolution and 5m/1h downsampled blocks under the same prefix. Would prefixing them separately, so each iteration only touches the relevant subpaths, make this simpler? (Rough illustration below.)
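
Roughly the difference being described, with illustrative paths (not Thanos's or Mimir's exact scheme):

# today: every tenant and resolution under one prefix, so discovery lists everything
thanos/oregon-dev/<block ULID>/...

# sketched alternative: per-tenant and per-resolution prefixes, so each iteration
# only lists the subpaths it cares about
thanos/oregon-dev/<tenant>/raw/<block ULID>/...
thanos/oregon-dev/<tenant>/downsampled-5m/<block ULID>/...
thanos/oregon-dev/<tenant>/downsampled-1h/<block ULID>/...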

@yeya24
Contributor

yeya24 commented Dec 4, 2023

Like in Cortex, a bucket index would help speed up block discovery and loading. I think Thanos could have something similar to avoid iterating the whole bucket every time.

@fpetkovski
Contributor

A bucket index would be a great addition; I wonder how hard it would be to merge that code from Cortex. However, we do need a short-term solution to unblock people from upgrading.

@MichaHoffmann
Contributor

Can we maintain a directory alongside the blocks that contains just files named after the blocks? Every time we add a block we write a new file, and every time we delete a block we delete the corresponding file there. Then we only need to look in that directory to discover blocks (rough sketch below).
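
A minimal sketch of that idea against the objstore Bucket interface, assuming a hypothetical block-index/ marker directory (the package, directory name, and functions below do not exist in Thanos; they only illustrate the shape):

package blockindex

import (
    "bytes"
    "context"
    "path"
    "strings"

    "github.com/thanos-io/objstore"
)

// indexDir is a hypothetical flat directory of zero-byte marker objects,
// one per block ULID, kept next to the blocks themselves.
const indexDir = "block-index/"

// MarkBlock writes an empty marker object whenever a block is uploaded.
func MarkBlock(ctx context.Context, bkt objstore.Bucket, blockID string) error {
    return bkt.Upload(ctx, path.Join(indexDir, blockID), bytes.NewReader(nil))
}

// UnmarkBlock removes the marker whenever a block is deleted.
func UnmarkBlock(ctx context.Context, bkt objstore.Bucket, blockID string) error {
    return bkt.Delete(ctx, path.Join(indexDir, blockID))
}

// ListBlocks discovers blocks by listing only the marker directory instead of
// iterating the whole bucket.
func ListBlocks(ctx context.Context, bkt objstore.Bucket) ([]string, error) {
    var ids []string
    err := bkt.Iter(ctx, indexDir, func(name string) error {
        ids = append(ids, strings.TrimSuffix(strings.TrimPrefix(name, indexDir), "/"))
        return nil
    })
    return ids, err
}

The obvious trade-off is keeping the markers consistent with the actual blocks, which is essentially the same problem a bucket index solves more completely.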

@MichaHoffmann
Contributor

A bucket index would be a great addition; I wonder how hard it would be to merge that code from Cortex. However, we do need a short-term solution to unblock people from upgrading.

For the short term we could maybe introduce a hidden flag to opt into the new behavior, and for the long term move towards a bucket index solution; wdyt?

@yeya24
Contributor

yeya24 commented Dec 5, 2023

Sharing the Cortex bucket index doc: https://cortexmetrics.io/docs/blocks-storage/bucket-index/
Thanos might not need all of its features, but the idea is the same.
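
For a rough idea of the shape: the index is a single object listing known blocks and deletion marks, which readers fetch instead of listing the bucket. The field names below are illustrative, not the exact Cortex schema:

{
    "version": 1,
    "updated_at": 1701734400,
    "blocks": [
        { "block_id": "<block ULID>", "min_time": 1701648000000, "max_time": 1701655200000 }
    ],
    "block_deletion_marks": [
        { "block_id": "<block ULID>", "deletion_time": 1701700000 }
    ]
}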

@michaelswierszcz

michaelswierszcz commented Feb 5, 2024

--block-viewer.global.sync-block-timeout=5m
--block-viewer.global.sync-block-interval=1m

https://thanos.io/tip/components/compact.md/

Changing these defaults on the Thanos compactor fixes the iter context-deadline timeouts in my cluster.
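
For reference, a sketch of how those flags might look in a compactor invocation (the paths and other flags are placeholders):

thanos compact \
    --wait \
    --data-dir=/var/thanos/compact \
    --objstore.config-file=/etc/thanos/objstore.yaml \
    --block-viewer.global.sync-block-timeout=5m \
    --block-viewer.global.sync-block-interval=1m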

@BouchaaraAdil
Contributor

--block-viewer.global.sync-block-timeout=5m
--block-viewer.global.sync-block-interval=1m

I confirm these worked. Thanks @michaelswierszcz!
