GKE: can't get things to work when ingress: enabled: true #62

Open · Splaktar opened this issue May 6, 2021 · 27 comments

Splaktar commented May 6, 2021

Similar to #48, I'm having a lot of problems with getting this to work with GKE (1.18.16-gke.2100). However, I'm starting with a brand new cluster with no existing ingresses.

If I take https://github.com/verdaccio/charts/blob/master/charts/verdaccio/values.yaml and copy it locally and only change the following

ingress:
  enabled: true

Then run

helm install npm -f values.yaml verdaccio/verdaccio

I get

NAME: npm
LAST DEPLOYED: Thu May  6 00:41:02 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:

Then in my pod, I start to see the following:

warn --- invalid address - http://0.0.0.0:tcp://xx.xxx.xxx.xx:4873, we expect a port (e.g. "4873"),
host:port (e.g. "localhost:4873") or full url (e.g. "http://localhost:4873/")

In the service UI, I see this

[Screenshot: Screen Shot 2021-05-06 at 00 50 26]

If I leave ingress enabled set to false, everything spins up fine, but I can't get to it using an external IP.
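
(For reference, one way to get an external IP without an Ingress would be switching the chart's Service to a LoadBalancer type in values.yaml. A minimal sketch, not a configuration taken from this issue:)

service:
  type: LoadBalancer
  port: 4873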


Splaktar commented May 6, 2021

I've tried adding

service:
  annotations: {
    cloud.google.com/neg: '{"ingress": true}'
  }

But that didn't help.

I tried changing to

  paths:
    - "/*"

But that didn't help.


Splaktar commented May 6, 2021

The above was attempted using an Autopilot Cluster. I'm going to try again with a standard cluster.

Update: I saw the same behavior with the Standard clusters.


todpunk commented May 8, 2021

An ingress is useless without an ingress controller, and typically you'll want to add a host at the very least.
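
For example, a minimal values.yaml sketch along those lines (the host and ingress class here are placeholders, not values from this issue):

ingress:
  enabled: true
  paths:
    - /
  hosts:
    - npm.example.com
  annotations:
    kubernetes.io/ingress.class: nginx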

I'm guessing this is technically a bug, in that it shouldn't pass the wrong values to the template and generate that service/ingress combo (I may have time to test this on Monday), but even if it passed compatible values you would still not get a working ingress.

Are you using an ingress controller? If so, which, and what annotations will allow it to map things to the path?

Can you provide the ingress and service descriptions? kubectl -n <namespace> describe service <servicename> and same for ingress.


Splaktar commented May 8, 2021

$ kubectl describe service npm-verdaccio
Name:              npm-verdaccio
Namespace:         default
Labels:            app=npm-verdaccio
                   app.kubernetes.io/instance=npm
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=verdaccio
                   app.kubernetes.io/version=5.0.1
                   helm.sh/chart=verdaccio-4.0.0
Annotations:       cloud.google.com/neg: {"ingress": true}
                   cloud.google.com/neg-status:
                     {"network_endpoint_groups":{"4873":"k8s1-328e1eab-default-npm-verdaccio-4873-f4839dd4"},"zones":["us-central1-a","us-central1-b"]}
                   meta.helm.sh/release-name: npm
                   meta.helm.sh/release-namespace: default
Selector:          app.kubernetes.io/instance=npm,app.kubernetes.io/name=verdaccio
Type:              ClusterIP
IP:                10.106.128.180
Port:              <unset>  4873/TCP
TargetPort:        http/TCP
Endpoints:         10.106.0.130:4873
Session Affinity:  None
Events:
  Type    Reason  Age    From            Message
  ----    ------  ----   ----            -------
  Normal  Create  2m38s  neg-controller  Created NEG "k8s1-328e1eab-default-npm-verdaccio-4873-f4839dd4" for default/npm-verdaccio-k8s1-328e1eab-default-npm-verdaccio-4873-f4839dd4--/4873-http-GCE_VM_IP_PORT-L7 in "us-central1-a".
  Normal  Create  2m34s  neg-controller  Created NEG "k8s1-328e1eab-default-npm-verdaccio-4873-f4839dd4" for default/npm-verdaccio-k8s1-328e1eab-default-npm-verdaccio-4873-f4839dd4--/4873-http-GCE_VM_IP_PORT-L7 in "us-central1-b".
  Normal  Attach  40s    neg-controller  Attach 1 network endpoint(s) (NEG "k8s1-328e1eab-default-npm-verdaccio-4873-f4839dd4" in zone "us-central1-a")
$ kubectl describe ingress npm-verdaccio
Name:             npm-verdaccio
Namespace:        default
Address:          107.xxx.255.xxx
Default backend:  default-http-backend:80 (10.106.0.2:8080)
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /   npm-verdaccio:4873 (10.106.0.130:4873)
Annotations:  ingress.kubernetes.io/backends: {"k8s-be-31319--328e1eabe967e100":"HEALTHY","k8s1-328e1eab-default-npm-verdaccio-4873-f4839dd4":"UNHEALTHY"}
              ingress.kubernetes.io/forwarding-rule: k8s2-fr-x537ki4w-default-npm-verdaccio-0ou1tpqw
              ingress.kubernetes.io/target-proxy: k8s2-tp-x537ki4w-default-npm-verdaccio-0ou1tpqw
              ingress.kubernetes.io/url-map: k8s2-um-x537ki4w-default-npm-verdaccio-0ou1tpqw
              meta.helm.sh/release-name: npm
              meta.helm.sh/release-namespace: default
Events:
  Type    Reason     Age               From                     Message
  ----    ------     ----              ----                     -------
  Normal  Sync       7m41s             loadbalancer-controller  UrlMap "k8s2-um-x537ki4w-default-npm-verdaccio-0ou1tpqw" created
  Normal  Sync       7m39s             loadbalancer-controller  TargetProxy "k8s2-tp-x537ki4w-default-npm-verdaccio-0ou1tpqw" created
  Normal  Sync       7m35s             loadbalancer-controller  ForwardingRule "k8s2-fr-x537ki4w-default-npm-verdaccio-0ou1tpqw" created
  Normal  IPChanged  7m35s             loadbalancer-controller  IP is now 107.xxx.255.xxx
  Normal  Sync       50s (x6 over 9m)  loadbalancer-controller  Scheduled for sync


Splaktar commented May 9, 2021

> Are you using an ingress controller?

I believe that I'm using Container-native load balancing. I'm not using a Shared VPC. My cluster is running 1.18.16-gke.2100.

Networking status:
- VPC-native traffic routing: Enabled
- HTTP Load Balancing: Enabled
- Network policy: Disabled

My access logs on the ingress have

{@type: type.googleapis.com/google.cloud.loadbalancing.type.LoadBalancerLogEntry,
 statusDetails: failed_to_pick_backend}


Splaktar commented May 9, 2021

OK, I got something working finally!

Similar to #48 (comment), I had to change the following in the Service:

  ports:
  - port: 4873
    protocol: TCP
    targetPort: http

to

  ports:
  - port: 80
    protocol: TCP
    targetPort: 4873

Then in the Ingress, I had to change

spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: npm-verdaccio
          servicePort: 4873
        path: /*
        pathType: ImplementationSpecific

to

spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: npm-verdaccio
          servicePort: 80
        path: /*
        pathType: ImplementationSpecific

Now I'm trying to figure out how to change values.yaml to generate this configuration...


Splaktar commented May 9, 2021

And related to #48 (comment) I tried updating the service to

  ports:
  - port: 80
    protocol: TCP
    targetPort: http

But that didn't work.


Splaktar commented May 9, 2021

It seems like this should be 80


Splaktar commented May 9, 2021

I did another full deployment with the following values.yaml and it succeeded even with leaving the targetPort: http in the service:

image:
  repository: verdaccio/verdaccio
  tag: 5.0.4
  pullPolicy: IfNotPresent
  pullSecrets: [ ]
  # - dockerhub-secret

nameOverride: ""
fullnameOverride: ""

service:
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  clusterIP: ""

  ## List of IP addresses at which the service is available
  ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
  ##
  externalIPs: [ ]

  loadBalancerIP: ""
  loadBalancerSourceRanges: [ ]
  port: 80
  type: ClusterIP
  # nodePort: 31873

## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: { }

## Affinity for pod assignment
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
##
affinity: { }

## Tolerations for nodes
tolerations: [ ]

## Additional pod labels
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: { }

## Additional pod annotations
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: { }

replicaCount: 1

resources: { }
  # requests:
  #   cpu: 100m
  #   memory: 512Mi
  # limits:
  #   cpu: 100m
  #   memory: 512Mi

ingress:
  enabled: true
  # Set to true if you are on an old cluster where apiVersion extensions/v1beta1 is required
  useExtensionsApi: false
  # This (/*) is due to gce ingress needing a glob where nginx ingress doesn't (/).
  paths:
    - /*
  # Use this to define, ALB ingress's actions annotation based routing. Ex: for ssl-redirect
  # Ref: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/tasks/ssl_redirect/
  extraPaths: [ ]
# hosts:
#   - npm.blah.com
# annotations:
#   kubernetes.io/ingress.class: nginx
# tls:
#   - secretName: secret
#     hosts:
#       - npm.blah.com

## Service account
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: { }
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the Chart's fullname template
  name: ""

# Extra Environment Values - allows yaml definitions
extraEnvVars:
#  - name: VALUE_FROM_SECRET
#    valueFrom:
#      secretKeyRef:
#        name: secret_name
#        key: secret_key
#  - name: REGULAR_VAR
#    value: ABC

# Extra Init Containers - allows yaml definitions
extraInitContainers: [ ]

configMap: |
  # This is the config file used for the docker images.
  # It allows all users to do anything, so don't use it on production systems.
  #
  # Do not configure host and port under `listen` in this file
  # as it will be ignored when using docker.
  # see https://github.com/verdaccio/verdaccio/blob/master/docs/docker.md#docker-and-custom-port-configuration
  #
  # Look here for more config file examples:
  # https://github.com/verdaccio/verdaccio/tree/master/conf
  #

  # path to a directory with all packages
  storage: /verdaccio/storage/data

  web:
    # WebUI is enabled as default, if you want disable it, just uncomment this line
    #enable: false
    title: Verdaccio

  auth:
    htpasswd:
      file: /verdaccio/storage/htpasswd
      # Maximum amount of users allowed to register, defaults to "+infinity".
      # You can set this to -1 to disable registration.
      #max_users: 1000

  # a list of other known repositories we can talk to
  uplinks:
    npmjs:
      url: https://registry.npmjs.org/
      agent_options:
        keepAlive: true
        maxSockets: 40
        maxFreeSockets: 10

  packages:
    '@*/*':
      # scoped packages
      access: $all
      publish: $authenticated
      proxy: npmjs

    '**':
      # allow all users (including non-authenticated users) to read and
      # publish all packages
      #
      # you can specify usernames/groupnames (depending on your auth plugin)
      # and three keywords: "$all", "$anonymous", "$authenticated"
      access: $all

      # allow all known users to publish packages
      # (anyone can register by default, remember?)
      publish: $authenticated

      # if package is not available locally, proxy requests to 'npmjs' registry
      proxy: npmjs

  # To use `npm audit` uncomment the following section
  middlewares:
    audit:
      enabled: true

  # log settings
  logs: {type: stdout, format: pretty, level: http}
  # logs: {type: file, path: verdaccio.log, level: info}

persistence:
  enabled: true
  ## A manually managed Persistent Volume and Claim
  ## Requires Persistence.Enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  # existingClaim:

  ## Verdaccio data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass: "standard-rwo"

  accessMode: ReadWriteOnce
  size: 8Gi
  ## selector can be used to match an existing PersistentVolume
  ## selector:
  ##   matchLabels:
  ##     app: my-app
  selector: { }

  volumes:
  #  - name: nothing
  #    emptyDir: {}
  mounts:
  # - mountPath: /var/nothing
  #   name: nothing
  #   readOnly: true

podSecurityContext:
  fsGroup: 101
securityContext:
  runAsUser: 10001

priorityClass:
  enabled: false
  # name: ""

existingConfigMap: false


todpunk commented May 9, 2021

I haven't tried any of this yet, but as an aside (and for anyone finding this issue historically) I can say that the "http" port name is a misnomer. Kubernetes does not have any meaning for the string in the targetPort. If it's an integer, k8s will use that value. If it is a string, it will use the corresponding port name on the pod. If it can't find a corresponding name, the service fails to work. So the string "http" is just for human readability, and you can see in the template that the container port is named "http".

That's why the name will work.
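
To illustrate the named-port resolution with manifests shaped like the ones earlier in this thread (a sketch, not the chart templates verbatim):

# Pod template: the container port is given a name
ports:
- containerPort: 4873
  name: http
  protocol: TCP

# Service: targetPort refers to that name instead of the number
ports:
- port: 4873
  protocol: TCP
  targetPort: http   # resolves to containerPort 4873 via the name "http"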

I'll comment further on what I find in experimenting Monday, of course, and if I find anything I'll PR it.

@Splaktar (Author)

I tried a few more deployments and I ended up having to manually override the service from

targetPort: http

to

targetPort: 4873

to get things to work consistently.

@Splaktar (Author)

I'm finding that I have to update targetPort from http to 4873 every single time that I do helm upgrade 😞


noroutine commented May 15, 2021

Just hit this outside GKE.

Long story short - hunt in your env where VERDACCIO_PORT is defined by k8s and avoid that

Long story long: when I named my release verdaccio in ArgoCD, the automatic pod variable VERDACCIO_PORT for my service overrides the variable here https://github.com/verdaccio/verdaccio/blob/master/Dockerfile#L57

I think this essentially can be treated as a bug, and that variable should be renamed to avoid clashing with the k8s pod environment, since I'd assume verdaccio is quite a common name for releases.

EDIT: edited as I just realised your setup is different, but the root cause is essentially the same.
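
To make the clash concrete (the address below is taken from a later comment in this thread): for a Service named verdaccio, Kubernetes injects docker-link-style variables such as VERDACCIO_SERVICE_HOST, VERDACCIO_SERVICE_PORT, and VERDACCIO_PORT=tcp://10.0.45.43:4873 into pods in the same namespace, and that last one shadows the image's ENV VERDACCIO_PORT=4873. One possible mitigation with this chart, besides renaming the release, might be pinning the variable back via extraEnvVars in values.yaml (untested here, and it assumes container-level env takes precedence over the injected service-link variables):

extraEnvVars:
  - name: VERDACCIO_PORT
    value: "4873"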

@noroutine

@Splaktar, I thought at first it might be because of naming or templating in the chart, but I don't see this particular chart release generating that variable by itself. I'd rather think it must be defined somewhere else.
Maybe some other verdaccio service defined in the same namespace? Or some default env generated and exported by GKE?


Splaktar commented May 16, 2021

@noroutine thank you for looking into this. Indeed I do have the following

    spec:
      containers:
      - image: verdaccio/verdaccio:5.0.4
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /-/ping
            port: http
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: verdaccio
        ports:
        - containerPort: 4873
          name: http
          protocol: TCP

Defined in my deployment. It looks like I can't use helm upgrade to update the name:

$ helm upgrade npm -f values-https.yaml verdaccio/verdaccio
Error: UPGRADE FAILED: cannot patch "npm-verdaccio" with kind Deployment: Deployment.apps 
"npm-verdaccio" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/instance":"npm", 
"app.kubernetes.io/name":"npm-verdaccio"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: 
field is immutable

So I'll be trying to stand up another cluster to see if this resolves it. I needed to start fresh to try enabling JWT anyhow...

@Splaktar (Author)

I'm seeing that specifying nameOverride: "npm" and fullnameOverride: "registry-staging" still ends up with
name: verdaccio being set in the deployment.


todpunk commented May 17, 2021

As an update from my troubleshooting, I used the values.yaml that was simply enabling the ingress, and the command:

helm install npm -f values.yaml verdaccio/verdaccio

I did so against both docker desktop's kubernetes and a minikube install (I don't normally use minikube).

Both worked as expected, and I get the following logs:

thansmann@<hostname>:~$ kubectl logs npm-verdaccio-7b94f46888-t29x5
 warn --- config file  - /verdaccio/conf/config.yaml
 warn --- Plugin successfully loaded: verdaccio-htpasswd
 warn --- Plugin successfully loaded: verdaccio-audit
 warn --- http address - http://0.0.0.0:4873/ - verdaccio/5.0.1
 http --- 10.1.0.1 requested 'GET /-/ping'
 http --- 200, user: null(10.1.0.1), req: 'GET /-/ping', bytes: 0/3
 http --- 10.1.0.1 requested 'GET /-/ping'
 http --- 200, user: null(10.1.0.1), req: 'GET /-/ping', bytes: 0/3
 http --- 10.1.0.1 requested 'GET /-/ping'
 http --- 200, user: null(10.1.0.1), req: 'GET /-/ping', bytes: 0/3
 http --- 10.1.0.1 requested 'GET /-/ping'
 http --- 200, user: null(10.1.0.1), req: 'GET /-/ping', bytes: 0/3
 http --- 10.1.0.1 requested 'GET /-/ping'
 http --- 200, user: null(10.1.0.1), req: 'GET /-/ping', bytes: 0/3
 http --- 10.1.0.1 requested 'GET /-/ping'
 http --- 200, user: null(10.1.0.1), req: 'GET /-/ping', bytes: 0/3
 http --- 10.1.0.1 requested 'GET /-/ping'
 http --- 200, user: null(10.1.0.1), req: 'GET /-/ping', bytes: 0/3
 http --- 10.1.0.1 requested 'GET /-/ping'
 http --- 200, user: null(10.1.0.1), req: 'GET /-/ping', bytes: 0/3
 http --- 10.1.0.1 requested 'GET /-/ping'
 http --- 200, user: null(10.1.0.1), req: 'GET /-/ping', bytes: 0/3
 http --- 10.1.0.1 requested 'GET /-/ping'
 http --- 200, user: null(10.1.0.1), req: 'GET /-/ping', bytes: 0/3
 http --- 10.1.0.1 requested 'GET /-/ping'
 http --- 200, user: null(10.1.0.1), req: 'GET /-/ping', bytes: 0/3
 http --- 10.1.0.1 requested 'GET /-/ping'
 http --- 200, user: null(10.1.0.1), req: 'GET /-/ping', bytes: 0/3

Kubernetes version 1.19

I would suggest adding a --namespace npmregistry or something to isolate it in your GKE cluster and see if that affects things. Named ports have weird scoping rules that escape me, but at this point this seems environment-specific, either to GKE or to whatever else was installed or configured in your setup at the time. I know GKE comes with a number of deployments already for things like stackdriver and such, so maybe something is interacting with that in the default namespace?

@Splaktar (Author)

OK, thanks, I'll take a look at using a namespace, it's a new concept to me. I'm waiting for it to come up now in a namespace and with both nameOverride and fullnameOverride defined.

@Splaktar (Author)

It still failed in a namespace and I had to set targetPort to 4873 to get it working. I just don't think that GKE will resolve targetPort: http and it needs the actual port number.

In the following docs, it only shows targetPort as an integer and mentions it being a port number. It links to the K8s spec that says that it can be a named containerPort, but I don't think that GKE supports that.

@himanshumehta1114

@Splaktar @todpunk @juanpicado @noroutine

I am also facing the same issue. I am using Kubernetes server version v1.14.10-gke.40, with helm chart v4.0.0 and verdaccio image v4.10.0.

Error log:

 warn --- config file  - /verdaccio/conf/config.yaml
 warn --- Verdaccio started
 warn --- Plugin successfully loaded: verdaccio-htpasswd
 warn --- Plugin successfully loaded: verdaccio-audit
 warn --- invalid address - http://0.0.0.0:tcp://xx.xx.xx.xx:4873, we expect a port (e.g. "4873"), host:port (e.g. "localhost:4873") or full url (e.g. "http://localhost:4873/")

I tried with the following configuration:

targetPort: 4873
port: 80

It updated the error message to:

 warn --- invalid address - http://0.0.0.0:tcp://xx.xx.xx.xx:80, we expect a port (e.g. "4873"), host:port (e.g. "localhost:4873") or full url (e.g. "http://localhost:4873/")

Still looking for a solution. No idea what is going on.

@Splaktar (Author)

@himanshumehta1114 the chart doesn't support specifying targetPort in the values.yaml.

I ended up forking the chart in order to add the needed GKE support; I tried to list the changes in the releases here:
https://github.com/xlts-dev/verdaccio-gke-charts/releases

However, it's only been tested with Verdaccio v5.


himanshumehta1114 commented May 26, 2021

@Splaktar Thanks for sharing the chart. Using this chart, verdaccio deployed smoothly, but I still wasn't able to access it. I had to update the ingress config as follows to make it work.

ingress:
  enabled: true
  paths:
    - /
  annotations:
    kubernetes.io/ingress.class: contour

Reason being, the chart adds

paths:
  - /*

I'm using the contour ingress controller and it doesn't support regular expressions in the path, so /* was not routing correctly.

Source: https://github.com/xlts-dev/verdaccio-gke-charts/blob/verdaccio-gke-charts-0.0.4/charts/verdaccio/values.yaml#L64

@Splaktar (Author)

@himanshumehta1114 good to know! Thankfully those are both things that you can define in your own copy of the values.yaml. That's great that it only needed those two tweaks though. I hadn't heard of contour before, but I'll keep my eyes open for more info on it. I've added some comments in the values.yaml about other ingress controllers.

@alfieyfc

> @Splaktar @todpunk @juanpicado @noroutine
>
> I am also facing the same issue. I am using Kubernetes server version v1.14.10-gke.40, with helm chart v4.0.0 and verdaccio image v4.10.0.
>
> Error log:
>
>  warn --- config file  - /verdaccio/conf/config.yaml
>  warn --- Verdaccio started
>  warn --- Plugin successfully loaded: verdaccio-htpasswd
>  warn --- Plugin successfully loaded: verdaccio-audit
>  warn --- invalid address - http://0.0.0.0:tcp://xx.xx.xx.xx:4873, we expect a port (e.g. "4873"), host:port (e.g. "localhost:4873") or full url (e.g. "http://localhost:4873/")
>
> I tried with the following configuration:
>
> targetPort: 4873
> port: 80
>
> It updated the error message to:
>
>  warn --- invalid address - http://0.0.0.0:tcp://xx.xx.xx.xx:80, we expect a port (e.g. "4873"), host:port (e.g. "localhost:4873") or full url (e.g. "http://localhost:4873/")
>
> Still looking for a solution. No idea what is going on.

@himanshumehta1114 I ran into this same issue. Did you use verdaccio for the release name? That's what I did when it didn't work. Seems like the environment variable VERDACCIO_PORT is overridden by Kubernetes, which was supposedly 4873 but set to tcp://10.0.45.43:4873.

Using a different release name should avoid that, since the injected variable names are derived from the Service name. Another workaround is to manually alter metadata.name in the service.yaml template. Maybe some conditional logic for the service name would be the right solution.

I used this:

name: {{ template "verdaccio.fullname" . }}-svc

This is probably unrelated to the original ingress issue though.


alfieyfc commented Jul 20, 2021

> @himanshumehta1114 the chart doesn't support specifying targetPort in the values.yaml.
>
> I ended up forking the chart in order to add the needed GKE support; I tried to list the changes in the releases here:
> https://github.com/xlts-dev/verdaccio-gke-charts/releases
>
> However, it's only been tested with Verdaccio v5.

I hadn't tried out the verdaccio-gke-charts fork because I forked my own before reading this thread. But I can confirm that on GKE, setting the targetPort field in service.yaml to the port name http recreates the OP's screenshot.

[screenshot]

I just changed this line as below and the issue was resolved.

targetPort: {{ .Values.service.port }}

Not sure if this is GKE / K8s version specific.
Not sure, either, of the original purpose of using the port name here, but the proposed method of setting targetPort in values.yaml would work too.
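
With that change, the rendered Service ports block ends up with matching numeric values, roughly like the following (assuming the chart's default service.port of 4873):

ports:
- port: 4873
  protocol: TCP
  targetPort: 4873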


ref: #48 (comment)

@juanpicado (Member)

@Splaktar Since you have https://github.com/xlts-dev/verdaccio-gke-charts pretty well maintained, should we link it from the README so users are aware of it? I guess you understand the differences deeply.

@Splaktar (Author)

I think that should be okay. We just added support for configuring the horizontal pod autoscaler today. We also added the ability to customize the liveness and readiness probes.

We forked the google-cloud-storage plugin, but that is not yet published. Hopefully in November, we will be able to get that published and make it the default for our GKE chart to enable a good autoscaling configuration out of the box.

We'll also look at some ways to contribute upstream once we get past some deadlines.
