
bug: APIExport permission claims are not respected #2818

Open
lionelvillard opened this issue Feb 20, 2023 · 5 comments
Labels: kind/bug, lifecycle/rotten

@lionelvillard (Contributor)

Describe the bug

APIExport permission claims are not respected.

Steps To Reproduce

  • In one terminal, start kcp (kcp start)
  • Open a new terminal
  • Run export KUBECONFIG=$(pwd)/.kcp/admin.kubeconfig
  • Create provider ws kubectl kcp ws create provider --enter
  • Create an APIExport claiming namespaces:
cat <<EOF | kubectl apply -f -
apiVersion: apis.kcp.io/v1alpha1
kind: APIExport
metadata:
  name: example
spec:
  latestResourceSchemas: []
  permissionClaims:
    - resource: namespaces
      all: true
EOF
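A quick sanity check (not part of the original repro) that the claim is registered on the export, using the names above:
kubectl get apiexport example -ojsonpath='{.spec.permissionClaims}'
This should print something like [{"all":true,"resource":"namespaces"}].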
  • Go to your ws kubectl ws
  • Bind the APIExport example kubectl kcp bind apiexport root:provider:example
  • Get your workspace id export MYWS=$(kubectl get namespace default -ojsonpath="{.metadata.annotations['kcp\.io/cluster']}")
  • Go back to the provider ws kubectl ws root:provider
  • Allow provider admin to access views
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: default
rules:
  - apiGroups:
      - apis.kcp.io
    resources:
      - apiexports/content
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: default
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
EOF
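As a hedged sanity check (not in the original repro) that the role binding above took effect, kubectl auth can-i can be asked on behalf of the service account, assuming the usual system:serviceaccount:<namespace>:<name> subject form:
kubectl auth can-i '*' apiexports/content --as=system:serviceaccount:default:default
The expected answer is yes.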
  • Get the APIExport view URL export VWURL=$(kubectl get apiexport example -ojsonpath='{.status.virtualWorkspaces[0].url}')
  • Get the provider token export PROVIDER_TOKEN=$(kubectl get secret default-token-hl48b -ojsonpath='{.data.token}' | base64 -D) (the default-token secret name is cluster-specific; use base64 -d on GNU/Linux, -D on macOS)
  • Get the list of your workspace namespace via the view:
kubectl --server="$VWURL/clusters/*" --token=$PROVIDER_TOKEN get namespaces
No resources found
  • Create a new namespace:
kubectl --server=$VWURL/clusters/$MYWS --token=$PROVIDER_TOKEN create namespace boo
namespace/boo created
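Note that nothing above ever accepted the claim on the consumer side. Assuming the APIBinding created by kcp bind apiexport takes the export's name (a guess on my part), this can be confirmed from the consumer workspace:
kubectl get apibinding example -ojsonpath='{.spec.permissionClaims}'
The output should be empty, i.e. no accepted claims; the namespace creation above succeeded anyway, which is the bug.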

Expected Behaviour

I would expect a 503 (or 404) for both operations.

Additional Context

No response

@lionelvillard lionelvillard added the kind/bug Categorizes issue or PR as related to a bug. label Feb 20, 2023
stevekuznetsov (Contributor) commented Feb 20, 2023

Looks like TestAPIExportPermissionClaims in test/e2e/virtual/apiexport/virtualworkspace_test.go does not fully handle these cases. First, it claims:

t.Logf("Verify that we get empty lists for all claimed resources (other than apibindings) because the claims have not been accepted yet")

Realistically, shouldn't this be a 403? We need to indicate to the provider somehow that no claims have been accepted, although it's not clear to me that this would work with a *-cluster request unless all claims had not been accepted. Likely we need to consider whether it's really valid and useful to allow un-accepted claims and, if so, how we want to expose this to the APIExport provider.

Second, it does not attempt any mutating requests before the claims are accepted. I think it will be more appropriate (or the only appropriate thing, once we have scoped claims?) for us to dynamically handle the set of a) resources and b) verbs on those resources that our forwarding storage allows as a function of the accepted claims. Although, this would mean that discovery is per-cluster, which ... unclear if we can support that or if clients would know what to do.
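
For contrast, my understanding (hedged; field names per apis.kcp.io/v1alpha1) is that accepting a claim means the consumer adds it to the APIBinding spec with state: Accepted, roughly:
kubectl patch apibinding example --type=merge -p '{"spec":{"permissionClaims":[{"resource":"namespaces","all":true,"state":"Accepted"}]}}'
Only once a claim is in that accepted set should the view serve reads or writes for it.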

sttts (Member) commented Feb 21, 2023

The namespace creation looks odd; I would expect it to be 403 (if I'm not missing anything).

The * list is exactly as expected. It is not scoped by bound workspaces, but has to answer uniformly and continuously, i.e. an empty list with 200.
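
Concretely, my reading of the expected semantics (hedged) against the repro commands above:
kubectl --server="$VWURL/clusters/*" --token=$PROVIDER_TOKEN get namespaces          # stays 200 with an empty list
kubectl --server=$VWURL/clusters/$MYWS --token=$PROVIDER_TOKEN create namespace boo  # should be 403 Forbidden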

s-urbaniak (Contributor) commented

The prerequisite to fix this is to introduce admission in the virtual apiserver view, which is done in #2456; see #2456 (comment).

Once admission in the virtual apiserver view lands, checks can be implemented there to handle the case described here.

@ncdc ncdc removed their assignment Jan 29, 2024
kcp-ci-bot (Contributor) commented

Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kcp-ci-bot kcp-ci-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 28, 2024
kcp-ci-bot (Contributor) commented

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

@kcp-ci-bot kcp-ci-bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 28, 2024