
Remote Command Executor Re-Using Stdin/Stdout Results in Lost Data #1325

Open
adrianosela opened this issue Dec 22, 2023 · 2 comments

adrianosela commented Dec 22, 2023

I am building a proxy between ssh and kubectl exec using the remotecommand library.
One of the neat things I'd like to do is deduce what shell to drop you in automatically without having to configure that.

So I am simply attempting to iterate over well-known shells (commands to use for the exec) as follows:

	shells := []string{"bash", "zsh", "ash", "sh"}
	shellSet := set.New(shells...)
	for _, shell := range shells {
		if shellSet.Size() == 0 {
			channel.Write([]byte("No shells available in the target container :("))
			s.logger.Error("no shells available in the target container")
			return
		}

		exec, err := getRemoteCommandExecutor(
			ctx,
			kubeconfig,
			user,
			shell,
			target,
		)
		if err != nil {
			s.logger.Error(
				"failed to get remote command executor for target",
				zap.String("namespace", target.namespace),
				zap.String("pod", target.pod),
				zap.String("container", target.container),
				zap.String("user", user),
				zap.String("shell", shell),
				zap.Error(err),
			)
			return
		}

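		// note: the same ssh channel is re-used as Stdin and Stdout on every iteration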
		err = exec.StreamWithContext(ctx, remotecommand.StreamOptions{
			Stdin:             channel,
			Stdout:            channel,
			Stderr:            channel.Stderr(),
			Tty:               true,
			TerminalSizeQueue: terminalSizeQ,
		})
		if err != nil {
			if strings.Contains(err.Error(), "executable file not found") ||
				strings.Contains(err.Error(), "command terminated with exit code 127") {
				shellSet.Remove(shell)
				continue // try next shell
			}
			if !errors.Is(err, io.EOF) && !errors.Is(err, context.Canceled) {
				s.logger.Error(
					"failed to stream between ssh channel and target container",
					zap.String("namespace", target.namespace),
					zap.String("pod", target.pod),
					zap.String("container", target.container),
					zap.String("user", user),
					zap.String("shell", shell),
					zap.Error(err),
				)
			}
		}
		return
	}
}

This works great when the first shell in the array (bash) is present in the container; otherwise we move on to the next one. The problem I'm facing is that bytes seem to get lost when I re-use the same ssh channel object (as stdin and stdout for the remote command executor) across iterations. It's not a matter of which shell I'm using; the problem exists as long as we iterate at least once in the loop.

I looked at the code in this repo and found that under the hood it's just an io.Copy goroutine for each stream (io.Reader/io.Writer)...

So I assume the io.Copy goroutine simply keeps hold of the reader longer than needed...
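
For what it's worth, here's a tiny standalone illustration of what I think is happening (plain Go, nothing from client-go): once a second goroutine also calls Read on the same reader, whichever goroutine happens to win a given Read keeps those bytes, so the "current" consumer can silently miss input.

package main

import (
	"bytes"
	"fmt"
	"io"
)

func main() {
	pr, pw := io.Pipe() // stands in for the shared ssh channel

	// copier left over from a previous (failed) exec attempt, still reading pr
	go io.Copy(io.Discard, pr)

	// copier for the current exec attempt
	var got bytes.Buffer
	done := make(chan struct{})
	go func() {
		io.Copy(&got, pr)
		close(done)
	}()

	pw.Write([]byte("whoami\n")) // "typed" input; may land in either copier
	pw.Close()
	<-done

	fmt.Printf("current attempt saw %q (sometimes empty)\n", got.Bytes())
}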

I tried wrapping my stdin/stdout in custom readers/writers that replay the last read/write each time we iterate, but that did not solve the problem...
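
The next thing I plan to try (sketch only, with made-up names, assuming the usual io and sync imports): keep a single goroutine as the sole reader of the ssh channel and hand each exec attempt a fresh io.Pipe as Stdin, so that a stale io.Copy goroutine from a failed attempt blocks on its own closed pipe instead of stealing bytes off the channel.

// switchableStdin owns the only Read loop on the ssh channel and forwards
// bytes into whichever per-attempt pipe is currently attached.
type switchableStdin struct {
	mu  sync.Mutex
	dst *io.PipeWriter
}

func newSwitchableStdin(upstream io.Reader) *switchableStdin {
	s := &switchableStdin{}
	go func() {
		buf := make([]byte, 32*1024)
		for {
			n, err := upstream.Read(buf)
			if n > 0 {
				s.mu.Lock()
				dst := s.dst
				s.mu.Unlock()
				if dst != nil {
					dst.Write(buf[:n]) // best effort; a detached (closed) pipe just errors
				}
			}
			if err != nil {
				s.mu.Lock()
				if s.dst != nil {
					s.dst.CloseWithError(err)
				}
				s.mu.Unlock()
				return
			}
		}
	}()
	return s
}

// attach detaches (closes) the previous attempt's pipe and returns a fresh
// io.Reader to pass as StreamOptions.Stdin for the next exec attempt.
func (s *switchableStdin) attach() io.Reader {
	pr, pw := io.Pipe()
	s.mu.Lock()
	if s.dst != nil {
		s.dst.Close()
	}
	s.dst = pw
	s.mu.Unlock()
	return pr
}

The idea would be to build it once with newSwitchableStdin(channel) before the loop and pass stdin.attach() instead of channel as StreamOptions.Stdin on each iteration; re-using the channel as Stdout/Stderr seems less risky since writers don't consume anything.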

I am out of things to try - any thoughts? I'm not entirely sure this is a bug in the code - maybe it is, maybe it isn't. Maybe re-use of the stdin/stdout io.Reader/io.Writer is something not many have attempted before, so it hasn't been an issue for anyone?
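
Another direction I'm considering, in case it helps: probe for a working shell with a short non-interactive exec first (nil stdin, output discarded), and only attach the ssh channel once a shell is known to exist, so the channel is never shared across attempts. Rough sketch, assuming a hypothetical getRemoteCommandExecutorForCommand variant of the helper above that execs an arbitrary argv (here shell -c true) instead of an interactive shell:

probeShell := func() (string, error) {
	for _, shell := range []string{"bash", "zsh", "ash", "sh"} {
		// hypothetical variant of getRemoteCommandExecutor that execs an arbitrary command
		exec, err := getRemoteCommandExecutorForCommand(ctx, kubeconfig, user, []string{shell, "-c", "true"}, target)
		if err != nil {
			return "", err
		}
		// Stdin is left nil here, so nothing is ever read from the ssh channel while probing.
		err = exec.StreamWithContext(ctx, remotecommand.StreamOptions{
			Stdout: io.Discard,
			Stderr: io.Discard,
		})
		if err == nil {
			return shell, nil
		}
		if strings.Contains(err.Error(), "executable file not found") ||
			strings.Contains(err.Error(), "command terminated with exit code 127") {
			continue // shell not present; try the next one
		}
		return "", err
	}
	return "", errors.New("no shells available in the target container")
}

With that, the interactive StreamWithContext call that actually uses the ssh channel would only ever run once.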

This code is used in the open source repo here: https://github.com/borderzero/border0-cli/blob/main/internal/ssh/session/kubectl_exec_session.go

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label (denotes an issue or PR has remained open with no activity and has become stale) on Mar 21, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label (denotes an issue or PR that has aged beyond stale and will be auto-closed) and removed the lifecycle/stale label on Apr 20, 2024