
oc doesn't time out watches on target server disappearing #11038

Closed
jimmidyson opened this issue Sep 21, 2016 · 5 comments
Assignees
Labels
component/cli · lifecycle/stale · priority/P2

Comments

@jimmidyson
Contributor

oc watches don't handle the target server disappearing (e.g. power off) gracefully.

Version
$ openshift version

openshift v1.3.0-alpha.2+10133fa
kubernetes v1.3.0+57fb9ac
etcd 2.3.0+git

Server https://192.168.42.116:8443
OpenShift v1.3.0
Kubernetes v1.3.0+52492b4
Steps To Reproduce
  1. Start OpenShift server & log in
  2. oc get pods -w
  3. Power off OpenShift host
Current Result

The oc client does not close the connection or exit; the watch hangs indefinitely.

Note that netstat -anp | grep 8443 shows the connection is still established.

Expected Result

The oc client should close the connection and report an error (or potentially reconnect transparently).

Additional Information

I assume this is because WS read timeouts are set to infinity. It would be better to implement WS ping (I've confirmed the API server handles this properly) and set a reasonable read timeout.

@liggitt
Contributor

liggitt commented Sep 21, 2016

oc doesn't use websockets; it uses chunked streaming connections. Not sure what the client-side timeout on the connection is.

@jimmidyson
Contributor Author

Ah, I didn't know that. It exhibits the same behaviour though, from what I can see, so I assume the read timeout is being disabled. That would be valid, of course: receiving no data for a long time would be fine if the connection were still actually valid.

@juanvallejo
Contributor

Adding a client request timeout here: #11104

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot added the lifecycle/stale label Feb 7, 2018
@juanvallejo
Contributor

@jimmidyson a client-side request timeout has been added.
I am closing this issue, but please re-open it if it is still not fixed.
