
k8s-dashboard exec intermittently shows a black screen and sometimes fails with error - Sending Error: Error: http status 404 #8838

Open
shnigam2 opened this issue Mar 26, 2024 · 5 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@shnigam2

What happened?

When trying to exec into a pod, the terminal intermittently shows a black screen and sometimes fails with the error: Sending Error: Error: http status 404.

Error in the dashboard pod logs:
2024/03/26 09:31:23 handleTerminalSession: can't Recv: sockjs: session not in open state

What did you expect to happen?

Exec should work every time, and a pod terminal session should be established on every attempt.

How can we reproduce it (as minimally and precisely as possible)?

EKS versions tried, with the same behaviour observed on both: 1.26.7 & 1.28.5
Dashboard version: 6.0.8
chart: kubernetes-dashboard
repoURL: https://kubernetes.github.io/dashboard/
The issue is observed when running 2 replicas of the dashboard pod; with a single replica it does not occur. A case has also been opened with OpenUnison: OpenUnison/openunison-k8s#105
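
For anyone who wants to pin the dashboard to a single replica as a temporary workaround, a minimal Helm values sketch is below. It assumes the 6.x chart exposes a top-level replicaCount value; check the values.yaml of your chart version before using it.

```yaml
# values-override.yaml (sketch for the kubernetes-dashboard 6.x chart)
# Assumption: the chart exposes a top-level replicaCount value.
replicaCount: 1   # a single replica avoids exec reconnects landing on a different pod
```

Applied with something like `helm upgrade kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard -f values-override.yaml` (release and repo names here are placeholders for this setup).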

Anything else we need to know?

Detailed information about OpenUnison and the ingress setup has been posted in OpenUnison/openunison-k8s#105

What browsers are you seeing the problem on?

Chrome, Safari

Kubernetes Dashboard version

6.0.8

Kubernetes version

1.26.7 & 1.28.5

Dev environment

NA

shnigam2 added the kind/bug label Mar 26, 2024
@shnigam2
Author

@floreks Could you please advise on this? It seems the issue started being reported in #8771

@dagobertdocker

Hi @floreks, can I offer any specific details/logs/TCP traces for your analysis?

@floreks
Member

floreks commented Apr 3, 2024

On our side, it's quite simple: the API exposes the WebSocket endpoint and the frontend connects to it. If you are using additional proxies, you need to make sure they can reliably forward such connections and keep them alive. I can imagine that if the connection is dropped and/or retried, the next exec can be forwarded to a different API pod than the previous one, which could result in an error. The only thing I can think of that we could improve on our side is to add a few retry attempts on the frontend so that you don't have to reload the page. That is only an improvement, though; everything else has to be handled on your side as part of your proxy configuration.
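
For illustration, if the component in front of the dashboard is ingress-nginx, cookie-based session affinity is one way to keep the exec WebSocket pinned to a single backend pod. This is only a sketch under that assumption; the hostname, service name, and port are placeholders, and in this particular setup OpenUnison sits in the chain as well and would also have to preserve the affinity cookie.

```yaml
# Sketch: cookie-based affinity on an ingress-nginx Ingress fronting the dashboard.
# Hostname and backend service/port below are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "dashboard-affinity"
spec:
  rules:
    - host: dashboard.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443
```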

@dagobertdocker

dagobertdocker commented Apr 4, 2024

Observation: when I scale the api deployment to a single replica (the default used to be 3, which has now been changed in the Helm chart for 7.1.3), it works consistently.

@floreks
Member

floreks commented Apr 4, 2024

That's expected. If the connection is dropped at any point, e.g. by your proxy, the client has to reconnect to the same pod, which keeps the connection open internally for some time. With more than 1 replica there is no guarantee that the request will be forwarded to the same pod on reconnect, so it would always have to create a new connection.
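
As an alternative to running a single replica, client-IP session affinity on the Service in front of the dashboard pods can approximate "always reconnect to the same pod". This is a sketch only, with placeholder names, selector, and ports, and it only helps if the source IP that kube-proxy sees is stable (for example when the upstream proxy itself is the stable client).

```yaml
# Sketch: ClientIP session affinity on the Service fronting the dashboard pods.
# Name, selector, and ports are placeholders for this setup.
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
spec:
  selector:
    app.kubernetes.io/name: kubernetes-dashboard
  sessionAffinity: ClientIP          # kube-proxy keeps routing a given client IP to the same pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600           # keep that mapping for up to an hour of inactivity
  ports:
    - port: 443
      targetPort: 8443
```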
