[Bug] inlets client fails to connect when the exposed port is 8080 #90

Open
zechen0 opened this issue Jul 18, 2020 · 2 comments
zechen0 (Contributor) commented Jul 18, 2020

Expected Behaviour

The inlets client running in the k8s cluster should connect to the inlets server successfully.

Current Behaviour

The inlets client running in the k8s cluster fails to connect to the inlets server.

Note that if we run the inlets client outside the k8s cluster, the connection succeeds, meaning the inlets server itself is working fine.
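
For reference, the outside-the-cluster check can be done with a plain inlets client pointed at the exit VM. The flags below are a sketch for inlets 2.x; the remote address and token are the ones that appear in the tunnel-client logs further down, and the upstream assumes a copy of the echoserver listening locally.

# Sketch: run the client from a machine outside the cluster (inlets 2.x flags).
inlets client \
  --remote "ws://45.79.148.100:8080" \
  --upstream "http://127.0.0.1:8080" \
  --token "p9DzT5gtqubn0EefdK9f9IxgEXjRGAXx57tiEnwR2henRhu7XovASoOvcHhklq0i"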

Possible Solution

Steps to Reproduce (for bugs)

kind create cluster

# Install inlets-operator
kubectl apply -f ./artifacts/crds/
kubectl create secret generic inlets-access-key --from-literal inlets-access-key=<Linode Access Token>
helm install --set provider=linode,region=us-east inlets-operator ./chart/inlets-operator

# Install and expose echoserver
kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.4 --port=8080
cat <<EOF | kubectl create -f -                      
apiVersion: v1  
kind: Service
metadata:
  labels:
    run: echoserver
  name: echoserver
  namespace: default
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: echoserver
  type: LoadBalancer
EOF
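
After applying the above, the operator should provision a Linode VM and hand its public IP to the Service. A quick way to watch that happen (a sketch, assuming the default namespace and the resource names used later in this report):

# Follow the operator while it creates the exit-node VM for the Service:
kubectl logs deploy/inlets-operator -f

# Wait for the echoserver Service to be assigned its EXTERNAL-IP, and check
# that the generated tunnel-client Deployment has come up:
kubectl get svc echoserver -w
kubectl get deploy echoserver-tunnel-client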

Logs

kubectl logs deployment.apps/echoserver-tunnel-client     
2020/07/18 10:25:36 Welcome to inlets.dev! Find out more at https://github.com/inlets/inlets
2020/07/18 10:25:36 Starting client - version 2.7.3
2020/07/18 10:25:36 Upstream:  => http://echoserver:8080
2020/07/18 10:25:36 Token: "p9DzT5gtqubn0EefdK9f9IxgEXjRGAXx57tiEnwR2henRhu7XovASoOvcHhklq0i"
time="2020/07/18 10:25:36" level=info msg="Connecting to proxy" url="ws://45.79.148.100:8080/tunnel"
time="2020/07/18 10:25:36" level=error msg="Failed to connect to proxy. Response status: 200 - 200 OK. Response body: CLIENT VALUES:\nclient_address=10.244.0.1\ncommand=GET\nreal path=/tunnel\nquery=nil\nrequest_version=1.1\nrequest_uri=http://45.79.148.100:8080/tunnel\n\nSERVER VALUES:\nserver_version=nginx: 1.10.0 - lua: 10001\n\nHEADERS RECEIVED:\nauthorization=Bearer p9DzT5gtqubn0EefdK9f9IxgEXjRGAXx57tiEnwR2henRhu7XovASoOvcHhklq0i\nconnection=Upgrade\nhost=45.79.148.100:8080\nsec-websocket-key=b5J+fZqRtaHdOHjv3hZ/7w==\nsec-websocket-version=13\nupgrade=websocket\nuser-agent=Go-http-client/1.1\nx-inlets-id=dcb2f0ffac964928a06bd64aaf5d60e8\nx-inlets-upstream==http://echoserver:8080\nBODY:\n-no body in request-" error="websocket: bad handshake"
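
Note that the body embedded in the bad-handshake error is the echoserver's own request dump ("CLIENT VALUES: ..."). As a sanity check, the same kind of body comes back when calling the echoserver Service directly from inside the cluster (a sketch using a throwaway curl pod, not part of the original report):

# Sketch: hit the echoserver Service directly; the "CLIENT VALUES:" dump should
# match the response body quoted in the handshake error above.
kubectl run -it --rm probe --image=curlimages/curl --restart=Never -- \
  curl -s http://echoserver.default.svc.cluster.local:8080/tunnel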

Context

Your Environment

  • Can be reproduced in different k8s environments: Linode, DigitalOcean, kind.
  • Can be reproduced using different apps, e.g. echoserver, OpenFaaS.
  • Can only be reproduced with the free version of inlets; unable to reproduce with inlets PRO.
zechen0 (Contributor, Author) commented Jul 18, 2020

The root cause may be that kube-proxy sets up iptables rules against port 8080 for this Service. These rules can be seen by logging into the k8s nodes.

As the output below shows, traffic to 45.79.148.100:8080 is eventually DNAT'ed to 10.244.0.6:8080 (the application pod). A way to observe this interception in action is sketched after the listings below.

root@kind-control-plane:/# iptables-save | grep -E '4AM65LVENFDJEVZH|M3SSESRYD7TVHMEI|8080'
:KUBE-FW-M3SSESRYD7TVHMEI - [0:0]
:KUBE-SEP-4AM65LVENFDJEVZH - [0:0]
:KUBE-SVC-M3SSESRYD7TVHMEI - [0:0]
-A KUBE-FW-M3SSESRYD7TVHMEI -m comment --comment "default/echoserver:http loadbalancer IP" -j KUBE-MARK-MASQ
-A KUBE-FW-M3SSESRYD7TVHMEI -m comment --comment "default/echoserver:http loadbalancer IP" -j KUBE-SVC-M3SSESRYD7TVHMEI
-A KUBE-FW-M3SSESRYD7TVHMEI -m comment --comment "default/echoserver:http loadbalancer IP" -j KUBE-MARK-DROP
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/echoserver:http" -m tcp --dport 32306 -j KUBE-SVC-M3SSESRYD7TVHMEI
-A KUBE-SEP-4AM65LVENFDJEVZH -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-4AM65LVENFDJEVZH -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:8080
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.76.33/32 -p tcp -m comment --comment "default/echoserver:http cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.76.33/32 -p tcp -m comment --comment "default/echoserver:http cluster IP" -m tcp --dport 8080 -j KUBE-SVC-M3SSESRYD7TVHMEI
-A KUBE-SERVICES -d 45.79.148.100/32 -p tcp -m comment --comment "default/echoserver:http external IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 45.79.148.100/32 -p tcp -m comment --comment "default/echoserver:http external IP" -m tcp --dport 8080 -m physdev ! --physdev-is-in -m addrtype ! --src-type LOCAL -j KUBE-SVC-M3SSESRYD7TVHMEI
-A KUBE-SERVICES -d 45.79.148.100/32 -p tcp -m comment --comment "default/echoserver:http external IP" -m tcp --dport 8080 -m addrtype --dst-type LOCAL -j KUBE-SVC-M3SSESRYD7TVHMEI
-A KUBE-SERVICES -d 45.79.148.100/32 -p tcp -m comment --comment "default/echoserver:http loadbalancer IP" -m tcp --dport 8080 -j KUBE-FW-M3SSESRYD7TVHMEI
-A KUBE-SVC-M3SSESRYD7TVHMEI -j KUBE-SEP-4AM65LVENFDJEVZH

The resources and their IP addresses:

kubectl get all -o wide
NAME                                            READY   STATUS    RESTARTS   AGE   IP            NODE                 NOMINATED NODE   READINESS GATES
pod/echoserver-5d9b4777bc-bzhn4                 1/1     Running   0          61m   10.244.0.6    kind-control-plane   <none>           <none>
pod/echoserver-tunnel-client-5d8b8dc45f-6pz8t   1/1     Running   0          32m   10.244.0.11   kind-control-plane   <none>           <none>
pod/inlets-operator-58558dd5cc-njxc5            1/1     Running   0          59m   10.244.0.7    kind-control-plane   <none>           <none>

NAME                 TYPE           CLUSTER-IP    EXTERNAL-IP                   PORT(S)          AGE   SELECTOR
service/echoserver   LoadBalancer   10.96.76.33   45.79.148.100,45.79.148.100   8080:32306/TCP   33m   run=echoserver
service/kubernetes   ClusterIP      10.96.0.1     <none>                        443/TCP          69m   <none>

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS        IMAGES                                        SELECTOR
deployment.apps/echoserver                 1/1     1            1           61m   echoserver        gcr.io/google_containers/echoserver:1.4       run=echoserver
deployment.apps/echoserver-tunnel-client   1/1     1            1           32m   client            inlets/inlets:2.7.3                           app.kubernetes.io/name=echoserver-tunnel-client
deployment.apps/inlets-operator            1/1     1            1           59m   inlets-operator   inlets-operator:latest   app.kubernetes.io/instance=inlets-operator,app.kubernetes.io/name=inlets-operator

NAME                                                  DESIRED   CURRENT   READY   AGE   CONTAINERS        IMAGES                                        SELECTOR
replicaset.apps/echoserver-5d9b4777bc                 1         1         1       61m   echoserver        gcr.io/google_containers/echoserver:1.4       pod-template-hash=5d9b4777bc,run=echoserver
replicaset.apps/echoserver-tunnel-client-5d8b8dc45f   1         1         1       32m   client            inlets/inlets:2.7.3                           app.kubernetes.io/name=echoserver-tunnel-client,pod-template-hash=5d8b8dc45f
replicaset.apps/inlets-operator-58558dd5cc            1         1         1       59m   inlets-operator   inlets-operator:latest   app.kubernetes.io/instance=inlets-operator,app.kubernetes.io/name=inlets-operator,pod-template-hash=58558dd5cc
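
One way to watch the interception described above (a sketch; run on the kind node, with the chain names taken from the iptables-save output):

# Get a shell on the kind node:
docker exec -it kind-control-plane bash

# Packet counters on the Service's chains rise while the tunnel client keeps
# retrying its connection to 45.79.148.100:8080:
iptables -t nat -L KUBE-FW-M3SSESRYD7TVHMEI -n -v
iptables -t nat -L KUBE-SEP-4AM65LVENFDJEVZH -n -v

# Rising counters on the KUBE-SEP chain (the DNAT to 10.244.0.6:8080) indicate
# that the client's dials never leave the cluster.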

alexellis (Member) commented

Pinging @mauilion for ideas

The server is running on a VM with a remote IP. It accepts traffic on port 80, then uses its websocket to talk to the inlets client (a Deployment on the local Kubernetes cluster).

The "upstream" URL for the inlets client works like the upstream of a reverse proxy: any request received from the server is forwarded there.

The issue appears to arise before we get that far: when the client opens an HTTP connection to ws://45.79.148.100:8080/tunnel, instead of reaching the external VM's IP on port 8080, it gets redirected to the "echoserver" Service, which also listens on port 8080.
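
A quick way to see the effect of that redirect (a sketch, not output captured from this setup): the same request reaches the inlets server when made from outside the cluster, but is answered by the echoserver when made from a pod.

# From a machine outside the cluster - the connection reaches the inlets
# server on the exit VM:
curl -si http://45.79.148.100:8080/tunnel

# From inside the cluster - kube-proxy's rule for the loadbalancer IP on port
# 8080 DNATs the connection to the echoserver pod, which answers instead:
kubectl run -it --rm curl-test --image=curlimages/curl --restart=Never -- \
  curl -si http://45.79.148.100:8080/tunnel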
