
Can't seem to make it work with existing ingress in GKE #48

Open
swelljoe opened this issue Jan 19, 2021 · 42 comments

Comments

@swelljoe

swelljoe commented Jan 19, 2021

I'm trying to get Verdaccio running in an existing cluster that works for other services, e.g. a pypiserver with the following service yaml (this one works in my cluster; note clusterIP is not defined in my local service yaml, it's filled in when the service is created with kubectl apply):

spec:
  clusterIP: 10.4.13.73
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: pypiserver
  sessionAffinity: None
  type: ClusterIP

And, when I look at the service page for this one, it shows port of 80 and target port of 8080.

But Verdaccio, installed with helm, isn't working with the ingress on the public IP, even though it can be reached if I set up a local port forward. The Verdaccio service yaml in GKE looks like this (this service does not work, despite looking, to me, very similar to the above):

spec:
  clusterIP: 10.4.6.96
  ports:
  - port: 4873
    protocol: TCP
    targetPort: http
  selector:
    app: verdaccio
    release: npmserver
  sessionAffinity: None
  type: ClusterIP

When I look at the service page for this one, it shows a port of 4873 and a target port of 0, so that feels like it might be a problem, somehow. I don't see any way to explicitly set targetPort (and it seems like that ought to be automatic anyway, since helm is setting up the service and knows where it runs better than I do). I think I'm misunderstanding something, but I can't find any clues after pretty extensive googling. I'm still pretty new to Kubernetes, though, so what I'm doing wrong may be obvious to someone else.

My values.yaml contains:

service:
  annotations:
  clusterIP: ""
  externalIPs:
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  port: 4873
  type: ClusterIP
  # nodePort: 31873

I've also tried explicitly setting the externalIPs and loadBalancerIP, but that didn't seem to work either, and those aren't specified for my other services that are working, AFAICS, so I don't think they should be needed here, either?

Anybody have a clue they can lend me for how this is supposed to be configured with a GKE Ingress?

@jfrancisco0
Collaborator

Hey! Did you configure the ingress section of your values.yaml? Specifically, did you enable the ingress? - https://github.com/verdaccio/charts/blob/master/charts/verdaccio/values.yaml#L44

You can check if the ingress exists with kubectl get ingress -n <your_namespace>

@swelljoe
Author

I did not! I thought that meant it would create an ingress, but I already have the ingress?

@jfrancisco0
Collaborator

Did you create it independently? You seem to have configured the service, but not necessarily the ingress. You can try configuring it in the values.yaml file like:

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "<your_ingress_controller>"
  hosts:
  - <your.host.com>

@swelljoe
Author

The ingress is preexisting and serves a bunch of other services.

I'll try that.

@swelljoe
Author

But, I'm not sure how to find out what the ingress.class should be? It's not nginx. It's the GKE...native ingress thing, I don't even know what it's called and googling isn't helping here (I didn't set this cluster up).

I noticed some of my (well, not mine, my company's) other services have an annotation of cloud.google.com/neg: '{"ingress": true}', but some do not.

@swelljoe
Author

OK, maybe it's gce-internal, I'm trying that.

As an aside, is there a way to reload configuration from values.yaml of the running pod without uninstall/installing it again? helm docs aren't helping me on that.

@swelljoe
Author

Hmm, that seems to have created a new ingress, which is not what I need to happen. I guess I'm still not getting something fundamental here.

@jfrancisco0
Collaborator

That will depend on the tools you're using. Assuming it's Helm, something like helm upgrade --reuse-values <release_name> verdaccio/verdaccio -n <namespace> --set ingress.enabled=true should work.

@swelljoe
Author

Yes, it's helm. I don't want to reuse values, though? I want to apply the new ones? I'm trying to test different settings without having to tear down and build up again.

@jfrancisco0
Collaborator

The upgrade command replaces the values after --set, so you don't have to redeploy just to change one value. I think you can also do it with a file instead of the --set flag; please check the Helm documentation.
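A rough sketch of the file-based variant, using the release name npmserver and the production namespace mentioned elsewhere in this thread (adjust both to your setup):

```shell
# Re-render the chart with an edited values file, without uninstalling.
# Values in the -f file override the chart defaults for this release.
helm upgrade npmserver verdaccio/verdaccio \
  -n production \
  -f values.yaml
```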

It doesn't seem to be an issue with Verdaccio, though. You don't necessarily need to create the ingress from the Helm chart, if you have an independent one, and it's properly configured. Can you discuss that with your cluster manager?

@swelljoe
Author

Yeah, we're (me and the person who created the cluster/ingress and other services) currently both stumped.

I definitely don't need an ingress to be created by helm. I just can't figure out how to make it talk to the existing one. I guess I need to download the whole chart and modify that targetPort to see if I can make it act like our existing services (they are not configured with helm, just kubernetes yaml files). I'm kind of out of ideas for what else could be preventing it from working when local port forwarding to 4873 does work.

@swelljoe
Author

Just so I'm clear, the ingress: true is (as I originally thought) used to make helm create an ingress, correct? So, since I need to use an existing one, I should go back to false for that.

@jfrancisco0
Collaborator

jfrancisco0 commented Jan 19, 2021

Yes, if you have an ingress already you may use it, if properly configured. In that case, keep the ingress from the chart disabled.

@swelljoe
Author

Thank you! I'll keep banging on it. I don't think I'm significantly further ahead of where I was before, but I guess I have a next step I can try (changing port and targetPort in the templates). Gonna sleep some and hope it makes more sense in the morning.

@kav
Collaborator

kav commented Jan 19, 2021

Hey y'all, I've done a lot of work with the GKE ingress controllers, so I should be able to help untangle this for you. First, @swelljoe, when you say an ingress already exists, do you mean an ingress controller, or did you already create an ingress resource for Verdaccio? If so, can you share it? Feel free to remove anything sensitive, of course.

Regarding the service: any service you expose in GKE with an ingress should have the service annotation cloud.google.com/neg: '{"ingress": true}' unless you are using a GKE version that adds it automatically. I've left it off and things work... mostly, but talking to the Google team, it's better to have it, as you get container-native load balancing. More here: https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing

For the service values: in general, I would strongly recommend not specifying the stuff you aren't using, and perhaps even leaving out anything that's the same as the default, since they will be merged. Since the default is a ClusterIP service, you really only need to specify the annotations. It's possible that some of the default empty values are triggering template paths they shouldn't, which may be our bug. We really shouldn't spec them in the default either.

service:
  annotations:
    cloud.google.com/neg: '{"ingress": true}'

In the event you don't have the ingress resource already this should get you what you want

ingress:
  enabled: true
  annotations:
    # if you want a VPC internal ingress, for external either remove the annotation or use "gce"
    kubernetes.io/ingress.class: "gce-internal" 
  hosts:
    - <your.host.com>
  paths:
    - "/*" # this is due to gce ingress needing a glob where nginx ingress doesn't

This is all assuming, of course, that we don't have any chart bugs, which is quite possible as well.

@kav
Collaborator

kav commented Jan 19, 2021

Ah, you may also need kubernetes.io/ingress.allow-http: "false" as an annotation on the ingress, depending on TLS config. The easiest way to tell is to describe the ingress; it'll tell you to add it.
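For reference, a quick way to see what the GCE controller thinks of each backend (names here are placeholders):

```shell
# The GCE ingress controller records per-backend health in the ingress's
# annotations (look for "backends") and surfaces problems in the events
# listed at the bottom of the output.
kubectl describe ingress <ingress-name> -n <namespace>
```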

@swelljoe swelljoe changed the title Can't seem to make ingress work in GKE Can't seem to make it work with existing ingress in GKE Jan 19, 2021
@swelljoe
Author

swelljoe commented Jan 19, 2021

Thanks, @kav . I'm working with an existing ingress.

I'd already gone back to having the cloud.google.com/neg: '{"ingress": true}' bit in there (I found some stuff on StackOverflow regarding that and it seems necessary, so I kept it). That didn't change anything about the problem I have getting ingress to connect.

My ingress looks like this (condensed and sanitized):

apiVersion: extensions/v1beta1
kind: Ingress
spec:
  rules:
  - host: npmserver.myhost.com
    http:
      paths:
      - backend:
          serviceName: npmserver-verdaccio
          servicePort: 4873
        path: /*
  tls:
  - secretName: my-secret

Which works for all of our other services, but not this one. The service shows OK, but the ingress treats it as unhealthy.

One thing to note is that all of our other services have servicePort: 80. I tried that for Verdaccio but it doesn't seem to make a difference... and I think I'd need to change the port Verdaccio runs on to make that work, and I don't see how to do that (when I change the service.port in values.yaml, even a local port forward does not work, so I think there's some configuration missing).

@jfrancisco0
Collaborator

jfrancisco0 commented Jan 19, 2021

Is this a typo in the ingress, or just in your comment? serviceName: vpmserver-verdaccio I'm guessing it should be serviceName: npmserver-verdaccio.

@swelljoe
Author

swelljoe commented Jan 19, 2021

Er, just a typo in the comment. It was correct in the actual ingress.

@jfrancisco0
Collaborator

My ingress, created from the Helm chart, for AWS/EKS, looks something like this:

apiVersion: extensions/v1beta1
kind: Ingress
spec:
  rules:
  - host: npm.domain.com
    http:
      paths:
      - backend:
          serviceName: verdaccio-verdaccio
          servicePort: 4873
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - npm.domain.com
    secretName: sec-npm
status:
  loadBalancer:
    ingress:
    - hostname: domain.amazonaws.com

@swelljoe
Author

Can I ask what your helm-generated Service looks like? I suspect that's where mine is falling down...my working services look like (stripped of non-ingress related stuff):

apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  name: pypiserver
spec:
  clusterIP: 10.4.13.73
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: pypiserver
  type: ClusterIP

While the helm-generated one looks like this (port: 4873 and targetPort: http is where my confusion lies... all my working ones have port 80 targeting the container port, like 8080, whereas this chart produces a port of 4873 and a target port of http, which seems to become 0 as far as GKE is concerned):

apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  name: npmserver-verdaccio
  selfLink: /api/v1/namespaces/production/services/npmserver-verdaccio
spec:
  clusterIP: 10.4.7.145
  ports:
  - port: 4873
    protocol: TCP
    targetPort: http
  selector:
    app: verdaccio
    release: npmserver
  type: ClusterIP

@jfrancisco0
Collaborator

Could this be a namespace issue? Is your service in the same namespace as the ingress? My service looks pretty similar to yours.

@swelljoe
Author

Yes, namespace is definitely correct.

@jfrancisco0
Collaborator

Can you share the output of kubectl get ingress -n <namespace> and kubectl get svc -n <namespace>?

@swelljoe
Author

svc (all the unrelated stuff removed):

npmserver-verdaccio   ClusterIP      10.4.7.145    <none>          4873/TCP                                     48m
pypiserver            ClusterIP      10.4.13.73    <none>          80/TCP                                       174d

ingress is complicated and not sensible looking enough to post (it's got a bazillion hosts, as it's providing ingress for a bunch of services), but I can say with 100% certainty that npmserver-verdaccio is among them. I notice it has PORTS of 80, 443 and all of my other (working) services are also using port 80.

@swelljoe
Author

swelljoe commented Jan 19, 2021

Just for completeness, ingress cleaned up and sanitized:

NAME         HOSTS                                                         ADDRESS       PORTS     AGE
my-ingress   npmserver.mydomain.com,pypiserver.mydomain.com + 47 more...   34.99.99.99   80, 443   574d

@jfrancisco0
Collaborator

Hum, looks similar to mine. You may want to further sanitize and remove the public IP address from your last comment.

@jfrancisco0
Collaborator

Can you replace targetPort: http from your service with targetPort: 4873? Seems to get its name here https://github.com/verdaccio/charts/blob/master/charts/verdaccio/templates/deployment.yaml#L43

@jfrancisco0
Collaborator

By the way, does npmserver.mydomain.com exist? Is it registered/does your DNS service translate it properly? Sorry, there's just a lot of options where this can be wrong.

@swelljoe
Author

Yes. DNS is correct.

And, yeah, that's my next thing to try (replacing targetPort and port in the templates) as that's the only thing I can see that differs from my other services that work. I don't know how to do that, still reading helm docs about how to install from a local chart directory or how to override one template file with a local one. (This is the first time I've ever used helm, still learning.)

@swelljoe
Author

And, the health check is failing, which doesn't use DNS. It's definitely a problem in the ingress<->service somewhere.

@jfrancisco0
Collaborator

I think you may be able to achieve that with kubectl edit service npmserver-verdaccio -n <namespace>
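Or, non-interactively, something like this patch should work for a quick test (sketch only, using the service name and port list shown above, with the named targetPort swapped for the number):

```shell
# Replace the service's single port entry so targetPort is numeric
# instead of the named port "http".
kubectl patch service npmserver-verdaccio -n production --type merge \
  -p '{"spec":{"ports":[{"port":4873,"protocol":"TCP","targetPort":4873}]}}'
```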

@jfrancisco0
Collaborator

I'm not sure the healthcheck uses the ingress at all

@swelljoe
Author

I didn't even think about editing it directly for testing! I've been fighting with this too long to think clearly.

Weirdly the Service itself shows healthy. It's only the...I don't even know what to call it, in the ingress, that is showing unhealthy.

@jfrancisco0
Collaborator

If that doesn't work, can you share the metadata section (annotations, labels) of your service and ingress? Can you check the kubernetes.io/ingress.class annotation on the ingress is correctly set? Try comparing it to other services you may be running that work correctly.

@swelljoe
Author

That fixed it! Thank you!

So...it seems like it'd be useful to allow modifying targetPort in values.yaml? I can probably make a PR to do that (will take me a day or so to wrap my head around the template language and how all the pieces fit together, but I think I can make it go), if that'd be helpful.

@jfrancisco0
Collaborator

Glad to hear that! Hum, maybe we should check first why it's failing to translate a named port. Then if it's normal behaviour (if for some reason your type of deployment doesn't work with named ports), a PR would be very welcome :)

@swelljoe
Author

I don't know enough to know what's normal. But, once I changed the service to be port: 80 and targetPort: 4873 health check started being happy and I was able to reach it on the public address and everything is now working.
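Concretely, the edited Service now looks roughly like this (clusterIP and other generated fields omitted):

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  name: npmserver-verdaccio
spec:
  ports:
  - port: 80            # the port the ingress talks to, matching our other services
    protocol: TCP
    targetPort: 4873    # the port Verdaccio actually listens on
  selector:
    app: verdaccio
    release: npmserver
  type: ClusterIP
```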

@jfrancisco0
Collaborator

And if you change port: 80 but keep targetPort: http, it doesn't work?

@swelljoe
Author

Would that be the same as changing the service.port in values.yaml? That did not work...but when I did that I couldn't even connect to verdaccio via a port-forward (which did work through all of this as long as port was 4873), so I think that maybe broke something else.

@jfrancisco0
Collaborator

Yes, try setting service.port: 80 and let targetPort take its default value of http.

Since, according to what you shared here before, your ingress is configured with servicePort: 4873, I find it weird that you need to set your service to port 80 for it to work. It would make sense for the servicePort on your ingress to match service.port.

@todpunk
Contributor

todpunk commented Jan 22, 2021

Do you happen to have any other deployments/pods in the namespace that define the port name http? Could be a collision there. The Verdaccio chart maps the port name http to 4873, but I don't know offhand the scoping rules for that name.

- containerPort: 4873
  name: http
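For context, a named targetPort is resolved per pod, against the container port names of each pod the service selects, so the intended wiring in the chart is a two-part sketch like this (names as in the chart):

```yaml
# In the pod template: the container names its port.
ports:
- containerPort: 4873
  name: http

# In the Service: targetPort refers to that name on each selected pod.
ports:
- port: 4873
  protocol: TCP
  targetPort: http
```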


4 participants