
API fails to fetch EtcdRestores (and other resources?) in dedicated master/seed architecture #5958

Closed
embik opened this issue May 12, 2023 · 6 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. kind/regression Categorizes issue or PR as related to a regression from a prior release. lifecycle/rotten Denotes an issue or PR that has aged beyond stale. sig/api Denotes a PR or issue as being assigned to SIG API.

Comments

@embik (Member) commented May 12, 2023

What happened

When setting up a dedicated master/seed architecture with KKP 2.22.2 (or the latest commits on the release/v2.22 branch), the UI returns errors on the project overview page. The kubermatic-api pods involved log the following error:

{"level":"debug","time":"2023-05-12T12:21:42.319Z","caller":"kubermatic-api/main.go:599","msg":"response","body":"{\"error\":{\"code\":500,\"message\":\"no matches for kind \\\"EtcdRestore\\\" in version \\\"kubermatic.k8c.io/v1\\\"\"}}\n","status":500,"uri":"/api/v2/projects/f465kk5nv6/etcdrestores"}

@pkprzekwas and I have been debugging this, and we believe it is happening because of the shared REST mapper injected into all providers; the error is thrown during discovery by the REST mapper in question:

etcdRestoreProjectProviderGetter := kubernetesprovider.EtcdRestoreProjectProviderFactory(mgr.GetRESTMapper(), seedKubeconfigGetter)

This shared REST mapper uses the master cluster for information about available resources, even when the clients generated with a reference to it are for seeds. That decision made sense at the time: all CRDs lived on both master and seed clusters. Unfortunately, this assumption was invalidated by kubermatic/kubermatic#10903, which shipped starting with KKP 2.22.0.
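The failure mode can be sketched without a cluster. The `fakeRESTMapper` type below is a hypothetical stand-in that only mimics the lookup behaviour, not the actual controller-runtime `RESTMapper` API; it shows why a seed-bound client wired to the master's discovery data cannot resolve a seed-only kind:

```go
package main

import "fmt"

// fakeRESTMapper stands in for a RESTMapper's discovery cache: the set of
// kinds the mapper learned from ONE cluster's discovery endpoint. The real
// kubermatic-api uses mgr.GetRESTMapper(), which is seeded from the master
// cluster only.
type fakeRESTMapper struct {
	knownKinds map[string]bool // keyed by "group/version, Kind"
}

// RESTMapping fails for any kind absent from the cached discovery data,
// mirroring the "no matches for kind" error in the API logs above.
func (m *fakeRESTMapper) RESTMapping(gvk string) error {
	if !m.knownKinds[gvk] {
		return fmt.Errorf("no matches for kind %q", gvk)
	}
	return nil
}

func main() {
	// After kubermatic/kubermatic#10903, seed-only CRDs such as
	// EtcdRestore are no longer installed on the master cluster, so the
	// master's discovery data does not contain them.
	masterMapper := &fakeRESTMapper{knownKinds: map[string]bool{
		"kubermatic.k8c.io/v1, Project": true,
		// "kubermatic.k8c.io/v1, EtcdRestore" is missing here.
	}}

	// A client for a SEED cluster that was built around the MASTER's
	// mapper fails exactly like the log line in this issue:
	err := masterMapper.RESTMapping("kubermatic.k8c.io/v1, EtcdRestore")
	fmt.Println(err)
	// → no matches for kind "kubermatic.k8c.io/v1, EtcdRestore"
}
```

The fix direction implied by this sketch is to build the REST mapper (or a dynamic one) per target cluster, from that cluster's own kubeconfig, rather than sharing the master's.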

As a note: Usage of the master REST mapper was also a problem in kubermatic/kubermatic, e.g. as described in kubermatic/kubermatic#12188.

This is effectively a part of kubermatic/kubermatic#12186.

Expected behavior

The API is capable of listing EtcdRestores and all other resources that only live on seed clusters.

How to reproduce

  • Set up KKP 2.22 with separate/dedicated master and seed clusters
  • Create a Project from the dashboard
  • View that project; the error message should appear

Environment

  • UI Version: v2.22.2
  • API Version: v2.22.2
  • Domain: n/a
  • Others:

Current workaround

Apply all CRDs to the master cluster.

Affected user persona

Users with large-scale KKP setups as described in our documentation.


@embik embik added kind/bug Categorizes issue or PR as related to a bug. sig/ui Denotes a PR or issue as being assigned to SIG UI. sig/api Denotes a PR or issue as being assigned to SIG API. kind/regression Categorizes issue or PR as related to a regression from a prior release. and removed sig/ui Denotes a PR or issue as being assigned to SIG UI. labels May 12, 2023
@embik (Member, Author) commented May 15, 2023

Note: I'm not sure we can fix this on short notice. I've started a thread to discuss whether we want to fix this here on the dashboard side or on the kubermatic/kubermatic side.

@embik (Member, Author) commented Jun 2, 2023

While we fixed this by bringing back all CRDs, I believe it still makes sense to progress on decoupling. As it stands, this issue is a prerequisite for bringing back the CRD split.

@kubermatic-bot (Contributor) commented:

Issues go stale after 90d of inactivity.
After a further 30 days, they turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.

If this issue is safe to close now, please do so with /close.

/lifecycle stale

@kubermatic-bot kubermatic-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 14, 2024
@kubermatic-bot (Contributor) commented:

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now, please do so with /close.

/lifecycle rotten

@kubermatic-bot kubermatic-bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 13, 2024
@kubermatic-bot (Contributor) commented:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

@kubermatic-bot (Contributor) commented:

@kubermatic-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
