
role parameter in eks update-kubeconfig is not being used for aws cli connection #8554

Open
xcompass opened this issue Feb 28, 2024 · 4 comments
Labels
bug This issue is a bug. customization Issues related to CLI customizations (located in /awscli/customizations) eks-kubeconfig p3 This is a minor priority issue

Comments

@xcompass

Describe the bug

It seems that the role parameter in aws eks update-kubeconfig --role arn:aws:iam::1234567890:role/ASSUMEDROLE is only inserted into the kubeconfig (for the kubectl get-token call), but is not used by the aws eks calls that retrieve the cluster information.

My use case: I have 2 AWS accounts with one EKS cluster in each account. I would like to manage both clusters with account A's credentials without switching back and forth between accounts. So I have set up a role (ASSUMEDROLE) in account B that a role in account A is allowed to assume. Everything works fine except the update-kubeconfig command for cluster B in account B. I expect to get the cluster B kubeconfig by running aws eks update-kubeconfig --name clusterB --role arn:aws:iam::ACCOUNTB#:role/ASSUMEDROLE, where the aws cli should use ASSUMEDROLE in account B to connect and retrieve the config, and also insert the role into the kubeconfig user's get-token command.

Currently, I have to create a new AWS profile with role_arn set to ASSUMEDROLE and the account A profile as its source, then run aws eks update-kubeconfig --name clusterB --role arn:aws:iam::ACCOUNTB#:role/ASSUMEDROLE --profile=NEWPROFILE to get the config. However, this ends up assuming the role twice, because the AWS_PROFILE environment variable pointing at NEWPROFILE is also added to the kubeconfig generated by the command:

  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - ca-central-1
      - eks
      - get-token
      - --cluster-name
      - clusterB
      - --role
      - arn:aws:iam::ACCOUNTB#:role/ASSUMEDROLE
      - --output
      - json
      command: aws
      env:
      - name: AWS_PROFILE
        value: newprofile
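For reference, the profile-based workaround relies on an ~/.aws/config entry along these lines (profile names here are illustrative, not taken from the report):

```ini
# ~/.aws/config -- illustrative sketch; NEWPROFILE assumes the cross-account
# role using account A's credentials as the source profile.
[profile accountA]
region = ca-central-1

[profile NEWPROFILE]
role_arn = arn:aws:iam::ACCOUNTB#:role/ASSUMEDROLE
source_profile = accountA
```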

The workaround is to remove --role from the update-kubeconfig command. However, I would like to use just a single profile. If the --role parameter were actually used for the aws eks connection, it would solve this problem.
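An alternative to dropping --role is to post-process the generated kubeconfig and strip the AWS_PROFILE entry, so only the --role passed to get-token triggers a role assumption. A rough sketch of that idea, operating on a plain dict that mirrors the generated user block shown above (this is the commenter's data, not an official CLI feature):

```python
# Post-processing sketch (not part of the CLI): drop the AWS_PROFILE env var
# that update-kubeconfig writes into the exec credential plugin, so that only
# the --role passed to `aws eks get-token` triggers a role assumption.

def strip_profile_env(exec_block: dict) -> dict:
    """Return a copy of an exec credential block without AWS_PROFILE."""
    cleaned = dict(exec_block)
    env = [e for e in cleaned.get("env", []) if e.get("name") != "AWS_PROFILE"]
    if env:
        cleaned["env"] = env
    else:
        cleaned.pop("env", None)  # drop the key entirely when nothing is left

    return cleaned

# Mirrors the kubeconfig user entry from the issue description.
exec_block = {
    "apiVersion": "client.authentication.k8s.io/v1beta1",
    "command": "aws",
    "args": ["--region", "ca-central-1", "eks", "get-token",
             "--cluster-name", "clusterB",
             "--role", "arn:aws:iam::ACCOUNTB#:role/ASSUMEDROLE",
             "--output", "json"],
    "env": [{"name": "AWS_PROFILE", "value": "newprofile"}],
}

cleaned = strip_profile_env(exec_block)
```

With the env entry gone, get-token resolves the default credential chain (account A) and performs the single assumption via --role, matching the setup described above.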

This behavior is also inconsistent with other commands, e.g. aws eks get-token --role, where the role is actually used for the aws eks connection.

Others are also running into this issue:
#5823
#6389

Expected Behavior

I expect to get the cluster B kubeconfig by running aws eks update-kubeconfig --name clusterB --role arn:aws:iam::ACCOUNTB#:role/ASSUMEDROLE, where the aws cli should use ASSUMEDROLE in account B to connect and retrieve the config, and also insert the role into the kubeconfig user's get-token command.

Current Behavior

The command only inserts the role into the kubeconfig; it does not use the role for the AWS CLI connection.

Reproduction Steps

Described in the description above.

Possible Solution

No response

Additional Information/Context

No response

CLI version used

aws-cli/2.15.17 Python/3.11.6 Darwin/23.3.0 exe/x86_64 prompt/off

Environment details (OS name and version, etc.)

OSX 14.3 (23D56)

@xcompass xcompass added bug This issue is a bug. needs-triage This issue or PR still needs to be triaged. labels Feb 28, 2024
@eraserx99

eraserx99 commented Mar 19, 2024

Second that...

I have the following environment variables configured before I run the aws eks update-kubeconfig --role-arn command.

AWS_REGION=us-west-2
AWS_DEFAULT_REGION=us-west-2
AWS_ACCESS_KEY_ID=XXXXXX
AWS_SECRET_ACCESS_KEY=XXXXXX
AWS_SESSION_TOKEN=XXXXXX
AWS_CREDENTIAL_EXPIRATION=2024-03-19T20:52:49Z

When I run the command, aws eks --region us-west-2 update-kubeconfig --name ${EKS_CLUSTER_NAME} --role-arn ${IAM_ARN}, I got this.

An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name:

IMHO, the implementation doesn't assume the role before it checks for the existence of the EKS cluster. However, I expect the implementation to assume the role first (as described by @xcompass).

Also, the aws eks get-token --role-arn works fine with the role arn specified (as described by @xcompass).
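In other words, the expected flow is assume-role first, then DescribeCluster with the temporary credentials. A minimal sketch of that call order with stubbed API calls (function names are illustrative stand-ins for the real STS and EKS calls, not the CLI's actual internals):

```python
# Sketch of the flow the commenters expect update-kubeconfig to follow:
# 1) assume the role passed via --role-arn,
# 2) call DescribeCluster using the resulting temporary credentials.
# The two callables are stand-ins for the real STS/EKS API calls.

def update_kubeconfig_flow(cluster_name, role_arn, assume_role, describe_cluster):
    creds = assume_role(role_arn)                     # step 1: assume first
    return describe_cluster(cluster_name, creds)      # step 2: then look up

calls = []

def fake_assume_role(arn):
    calls.append("AssumeRole")
    return {"AccessKeyId": "ASIAFAKE"}  # fake temporary credentials

def fake_describe_cluster(name, creds):
    calls.append("DescribeCluster")
    return {"name": name}

cluster = update_kubeconfig_flow(
    "clusterB", "arn:aws:iam::ACCOUNTB#:role/ASSUMEDROLE",
    fake_assume_role, fake_describe_cluster)
```

Today the CLI instead runs DescribeCluster with the caller's own credentials, which is why the ResourceNotFoundException above appears when the cluster lives in the other account.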

@danielloader

I wish this wasn't a thing. As a cheap and nasty roundabout solution I'm being forced into the following (and I don't recommend it if you don't need it, but I don't want lots of profiles for assuming roles in my aws config...):

eval $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s aws eks --region eu-west-2 update-kubeconfig --name CLUSTERNAME --role-arn arn:aws:iam::1234567890:role/ASSUMEDROLE" \
$(aws sts assume-role \
  --role-arn arn:aws:iam::1234567890:role/ASSUMEDROLE \
  --role-session-name AWSCLI-Session \
  --query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" \
  --output text))
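The same credential plumbing can be sketched in Python, with a dict standing in for the parsed `aws sts assume-role` output (the actual CLI invocation is shown commented out since it requires live AWS access; all names here are illustrative):

```python
import os

def env_with_credentials(creds: dict) -> dict:
    """Copy the current environment and inject temporary STS credentials,
    mirroring the AWS_ACCESS_KEY_ID=... prefix trick in the shell workaround."""
    env = dict(os.environ)
    env.update({
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": creds["SessionToken"],
    })
    return env

# Fake assume-role output; in practice this comes from `aws sts assume-role`.
creds = {"AccessKeyId": "ASIAEXAMPLE",
         "SecretAccessKey": "secret",
         "SessionToken": "token"}
env = env_with_credentials(creds)

# Requires live AWS access, so shown commented out:
# import subprocess
# subprocess.run(["aws", "eks", "--region", "eu-west-2", "update-kubeconfig",
#                 "--name", "CLUSTERNAME",
#                 "--role-arn", "arn:aws:iam::1234567890:role/ASSUMEDROLE"],
#                env=env, check=True)
```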

@tim-finnigan tim-finnigan self-assigned this May 10, 2024
@tim-finnigan tim-finnigan added the investigating This issue is being investigated and/or work is in progress to resolve the issue. label May 10, 2024
@tim-finnigan
Contributor

Thanks for reaching out — linking the update-kubeconfig and EKS User Guide for reference. As mentioned there:

This command constructs a configuration with prepopulated server and certificate authority data values for a specified cluster. You can specify an IAM role ARN with the --role-arn option to use for authentication when you issue kubectl commands. Otherwise, the IAM entity in your default AWS CLI or SDK credential chain is used. You can view your default AWS CLI or SDK identity by running the aws sts get-caller-identity command.

We can forward this issue to the EKS team for review as they are the owners of this customization.

@tim-finnigan tim-finnigan added eks-kubeconfig customization Issues related to CLI customizations (located in /awscli/customizations) p3 This is a minor priority issue and removed investigating This issue is being investigated and/or work is in progress to resolve the issue. needs-triage This issue or PR still needs to be triaged. labels May 10, 2024
@tim-finnigan tim-finnigan removed their assignment May 10, 2024
@allamand
Copy link

allamand commented May 16, 2024

Ideally we would need a way to tell update-kubeconfig which AWS account the EKS cluster is in, for multi-account scenarios.

Look at this scenario: [image: multi-account scenario diagram]

This requires me to assume a role in the sharedEKS account just to retrieve the kubeconfig file. From then on, I can work with my teamA role, thanks to EKS access entries, which work cross-account.

Ideally, I would like to have a parameter that lets me specify the AWS account hosting the EKS cluster, like:

aws eks --region eu-west-1 update-kubeconfig --name cluster_name --account <sharedEKS>

This would allow me to retrieve the kubeconfig while only dealing with the IAM role teamA in account teamA.

Or at least allow specifying an assume-role parameter on the command:

aws eks --region eu-west-1 update-kubeconfig --name cluster_name \
--assume-role-arn arn:aws:iam::sharedEKS:role/eks-cross-account 

With this new assume-role-arn, the CLI would assume this role prior to doing the update-kubeconfig. Note this could still be coupled with --role-arn, which would be added to the generated Kubernetes configuration and could be different (a role from accountA), while assume-role-arn would point to the sharedEKS account.
