Hey folks, I'm struggling to get this working in an Azure Kubernetes Service cluster. Here's what I've done so far; I'd love any pointers on what I've done wrong, where to look for troubleshooting, or the right repo to open a PR against...
The build image is created in a GitLab CI pipeline using a key stored in Vault. That all goes well...
cosign: A tool for Container Signing, Verification and Storage in an OCI registry.
GitVersion: v2.2.0
GitCommit: 546f1c5b91ef58d6b034a402d0211d980184a0e5
GitTreeState: clean
BuildDate: 2023-08-31T18:52:52Z
GoVersion: go1.21.0
Compiler: gc
Platform: linux/amd64
tlog entry created with index: 89560215
Pushing signature to: dminfraacrrmz5nifl.azurecr.io/dm/api
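For context, the pipeline step that produces output like the above is roughly the following; the variable names and the `env://` key reference are assumptions about this particular pipeline, not taken from it:

```shell
# Sketch of the CI signing step. COSIGN_KEY is assumed to be the private key
# material pulled from Vault into an environment variable; cosign can read
# keys from env vars via the env:// scheme.
cosign sign --key env://COSIGN_KEY --yes "${IMAGE_NAME}@${MANIFEST_DIGEST}"
```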
Manually verifying the image signature works fine.
$ echo ${IMAGE_NAME}@${MANIFEST_DIGEST}
dminfraacrrmz5nifl.azurecr.io/dm/api:v47@sha256:d211818dca7fa35514bcbc9a280bbbb70a1e136912d2aea9723d946219172865
$ cosign verify --key cosign.pub "${IMAGE_NAME}@${MANIFEST_DIGEST}"
Verification for dminfraacrrmz5nifl.azurecr.io/dm/api@sha256:d211818dca7fa35514bcbc9a280bbbb70a1e136912d2aea9723d946219172865 --
The following checks were performed on each of these signatures:
- The cosign claims were validated
- Existence of the claims in the transparency log was verified offline
- The signatures were verified against the specified public key
Next, we set up the policy controller in AKS using helm:
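For reference, the install looked roughly like this. The value keys used to attach the workload-identity annotation are assumptions; check the chart's values.yaml for your chart version:

```shell
# Sketch: install the Sigstore policy-controller chart into its own namespace.
# The serviceAccount annotation key for Azure workload identity is an assumption.
helm repo add sigstore https://sigstore.github.io/helm-charts
helm repo update
helm upgrade --install policy-controller sigstore/policy-controller \
  --namespace cosign-system --create-namespace \
  --set webhook.serviceAccount.annotations."azure\.workload\.identity/client-id"=<sp-client-id>
```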
The custom label and annotations map the policy-controller pod to an Azure service principal with AcrPull permissions. We can see that the service account is created with the annotations, and the pod comes up with the expected environment variables and volume mount.
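The policy itself (its name, `trust-signed-dm-images`, appears in the warning further down; the glob and the key placeholder here are illustrative) was applied along these lines:

```shell
# Sketch of a ClusterImagePolicy requiring a key signature on dm/* images.
# The glob and public key are placeholders, not the actual policy from this cluster.
kubectl apply -f - <<'EOF'
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: trust-signed-dm-images
spec:
  images:
    - glob: "dminfraacrrmz5nifl.azurecr.io/dm/**"
  authorities:
    - key:
        data: |
          -----BEGIN PUBLIC KEY-----
          <contents of cosign.pub>
          -----END PUBLIC KEY-----
EOF
```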
Running the following command fails in AKS, but the command DOES work in an EKS cluster. How are these different? Is this failure really just the same authentication failure in the second command?
$ echo ${IMAGE_NAME}
dminfraacrrmz5nifl.azurecr.io/dm/api:v47
$ k run -n sig-test dm-web-test --image ${IMAGE_NAME}
Error from server (BadRequest): admission webhook "policy.sigstore.dev" denied the request: validation failed: invalid value: dminfraacrrmz5nifl.azurecr.io/dm/api:v47 must be an image digest: spec.containers[0].image
Updating the image to include the digest also fails with an UNAUTHORIZED error message when I'd expect the workload identity environment variables to enable authentication to the private registry.
$ echo ${IMAGE_NAME}@${MANIFEST_DIGEST}
dminfraacrrmz5nifl.azurecr.io/dm/api:v47@sha256:d211818dca7fa35514bcbc9a280bbbb70a1e136912d2aea9723d946219172865
$ k run -n sig-test dm-web-test --image ${IMAGE_NAME}@${MANIFEST_DIGEST}
Warning: failed policy: trust-signed-dm-images: spec.containers[0].image
Warning: dminfraacrrmz5nifl.azurecr.io/dm/api:v47@sha256:d211818dca7fa35514bcbc9a280bbbb70a1e136912d2aea9723d946219172865 signature key validation failed for authority authority-0 for dminfraacrrmz5nifl.azurecr.io/dm/api@sha256:d211818dca7fa35514bcbc9a280bbbb70a1e136912d2aea9723d946219172865: GET https://dminfraacrrmz5nifl.azurecr.io/oauth2/token?scope=repository%3Adm%2Fapi%3Apull&service=dminfraacrrmz5nifl.azurecr.io: UNAUTHORIZED: authentication required, visit https://aka.ms/acr/authorization for more information.
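One thing worth checking before digging further is whether the workload identity webhook actually injected the Azure environment variables into the policy-controller pod. The deployment name below is an assumption; adjust it to whatever your Helm release created:

```shell
# Sketch: list the policy-controller pods, then dump any AZURE_* env vars
# from the webhook container to confirm workload identity injection happened.
kubectl -n cosign-system get pods
kubectl -n cosign-system exec deploy/policy-controller-webhook -- env | grep '^AZURE'
```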
I've not run this on Azure, so I can't say for sure. But I'd try to rule out auth by creating a simple test image that doesn't require it and requiring signing on that. An easy way to test would be ttl.sh, as in step 3 here: https://www.chainguard.dev/unchained/policy-controller-101
That should show whether it's auth-related. Of course, if you have another way to run a container that doesn't require auth, I'd try that.
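Following that suggestion, a minimal auth-free test might look like the sketch below. It assumes a ClusterImagePolicy whose glob also matches ttl.sh images, and that `crane` is installed for digest resolution; ttl.sh hosts anonymous images that expire after the duration given in the tag:

```shell
# Push an unauthenticated throwaway image to ttl.sh (valid for 1h), sign it,
# and run it by digest so the policy-controller can verify it without registry auth.
IMAGE=ttl.sh/$(uuidgen | tr 'A-Z' 'a-z'):1h
docker pull alpine:3.18 && docker tag alpine:3.18 "$IMAGE" && docker push "$IMAGE"
cosign sign --key cosign.key --yes "$IMAGE"
DIGEST=$(crane digest "$IMAGE")
kubectl run -n sig-test ttl-test --image "${IMAGE}@${DIGEST}"
```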
The ACR auth library upgrade is needed, as the current ACR helper library is very dated and uses end-of-life libraries from Microsoft (#1424). I also never got this working with workload identity, so I think the upgrade will help with that.
However, I was able to get this working using the managed identity on the worker nodes and granting the kubelet identity permission to the ACR repository:
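The role grant looked roughly like this; the resource group and cluster name placeholders are assumptions, while the registry name is the one from the failures above:

```shell
# Sketch: look up the AKS kubelet managed identity and grant it AcrPull
# on the registry, so node-level image pulls (and the policy-controller's
# reliance on node auth) can authenticate to ACR.
KUBELET_ID=$(az aks show -g <resource-group> -n <cluster-name> \
  --query identityProfile.kubeletidentity.clientId -o tsv)
ACR_ID=$(az acr show -g <resource-group> -n dminfraacrrmz5nifl --query id -o tsv)
az role assignment create --assignee "$KUBELET_ID" --role AcrPull --scope "$ACR_ID"
```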