
[bitnami/redis] haproxy frontend for redis/sentinel #5424

Closed
viceice opened this issue Feb 9, 2021 · 30 comments

Comments

@viceice
Contributor

viceice commented Feb 9, 2021

Which chart:
redis v12.7.4

Is your feature request related to a problem? Please describe.
There is a lot of software that can use Redis as a cache but is not able to use Sentinel, so it would be nice if I could enable HAProxy as a frontend, so that all non-Sentinel-aware software (like Docker registry) can use an HA Redis.

Describe the solution you'd like
Add a flag to enable an HAProxy deployment and service as a frontend to the Redis master, with automatic failover.

Describe alternatives you've considered
Manually configuring HAProxy.

Additional context
Configure haproxy for redis with sentinel:

@viceice changed the title from "[redis] allow haproxy frontend for redis with sentinel" to "[redis] haproxy frontend for redis/sentinel" Feb 9, 2021
@rafariossaa
Contributor

Hi,
Could you provide more background on the use case? I am not very sure of the need for HAProxy (or another proxy/load balancer), as you can use the service IP as a load balancer.
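
For reference, a quick way to check what a chart service is actually routing to (names here are illustrative, assuming a release called redis in the default namespace):

kubectl get endpoints redis-master -o wide                  # pod IPs currently behind the service
redis-cli -h redis-master.default.svc.cluster.local ping    # connect through the service DNS name (add -a <password> if auth is enabled)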

@viceice
Contributor Author

viceice commented Feb 9, 2021

Does the service IP always point to the current redis master?

@viceice
Contributor Author

viceice commented Feb 9, 2021

I assume yes 🙃

Embedded snippets from the chart's health-check configuration:

- /health/ping_liveness_local_and_master.sh {{ .Values.slave.livenessProbe.timeoutSeconds }}

ping_readiness_local_and_master.sh: |-

@viceice
Contributor Author

viceice commented Feb 9, 2021

I think master-slave-with-sentinel describes something different.

So I need a way to have a service always pointing to the master (auto failover), as my application is not Sentinel-aware. My idea was to use HAProxy (known to work with Sentinel) as a proxy.

distribution/distribution#2885 distribution/distribution#2573

@rafariossaa
Contributor

Hi,
The master host is pinged here, and REDIS_MASTER_HOST is set here.
Would that work for you?

@viceice
Contributor Author

viceice commented Feb 10, 2021

No, the master-slave mode is not a solution for me, as the master would be a single point of failure.

So my solution would be a service that always points to the Sentinel-elected master node. I thought fronting the Redis nodes with HAProxy could work.

Maybe overriding the readiness probe is another solution: let it fail if the current node doesn't have the master role.

@viceice
Contributor Author

viceice commented Feb 10, 2021

slave:
  persistence:
    enabled: false
    #storageClass: longhorn
    requests:
      memory: 64Mi
    limits:
      memory: 256Mi
  shareProcessNamespace: true
  readinessProbe:
    enabled: false
  customReadinessProbe:
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 3
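    # Intent: pass readiness only when the local node responds and reports role:master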
    exec:
      command:
        - 'bash'
        - '-c'
        - '/health/ping_readiness_local.sh 1 && export REDISCLI_AUTH=$(cat ${REDIS_PASSWORD_FILE}) && [ "$(timeout -s 3 1 redis-cli -h localhost info replication | grep -o "role:master")" == "role:master" ] || exit 1'

did not work, because the Helm deployment will time out (replicas that aren't master never pass readiness) 😕 so the only solution is to put HAProxy in front

@viceice
Contributor Author

viceice commented Feb 10, 2021

My workaround is to use the chart from DandyDeveloper/redis-ha.

I like the bitnami charts 🙃

@rafariossaa
Contributor

Hi,
Thank you for placing your trust in us.
We will take a look at that chart and see how we can improve ours.
Could you share which settings you are using when deploying the chart?

@viceice
Contributor Author

viceice commented Feb 11, 2021

values.yaml for redis-ha

auth: true
authKey: password

# not supported on single node cluster
hardAntiAffinity: false

image:
  repository: redis
  tag: 6.0.10-alpine

redis:
  resources:
    requests:
      memory: 64Mi
    limits:
      memory: 256Mi

sentinel:
  auth: true
  authKey: password
  resources: {}

haproxy:
  enabled: true
  hardAntiAffinity: false # see above
  image:
    repository: haproxy
    tag: 2.2.9-alpine
  resources: {}

persistentVolume:
  enabled: false

deploying with terraform

resource "helm_release" "redis" {
  name       = "redis"
  namespace  = local.ns
  repository = "https://dandydeveloper.github.io/charts"
  chart      = "redis-ha"
  version    = "4.12.1"

  values = [
    file("${path.module}/redis/values.yaml")
  ]

  set {
    name  = "existingSecret"
    value = kubernetes_secret.redis.metadata.0.name
  }
  set {
    name  = "sentinel.existingSecret"
    value = kubernetes_secret.redis.metadata.0.name
  }
}

@carrodher added the on-hold label (Issues or Pull Requests with this label will never be considered stale) Feb 12, 2021
@carrodher
Member

Thanks for the info, we already created an internal task to check if there is any feature missing from the redis-ha Helm Chart that can be added to ours. I just added the on-hold label to prevent the stale bot from closing this issue while we work on the task.

@maxisam
Contributor

maxisam commented Mar 15, 2021

This solution is documented on VMware's site as well. Feels like you guys should know this already :p j/k

https://community.pivotal.io/s/article/How-to-setup-HAProxy-and-Redis-Sentinel-for-automatic-failover-between-Redis-Master-and-Slave-servers?language=en_US

@rafariossaa
Contributor

Hi @maxisam,
Thanks for providing the link, I am adding it to our internal task.

@badihi

badihi commented Mar 25, 2021

I was using the chart for a master/slave setup today, and was wondering: since there is no HA (the main instance is a single point of failure), what is the purpose of the slave replicas?

@rafariossaa
Contributor

It is one of the possible ways of deploying this. If you need HA, I would suggest adding Sentinel to the mix; that component will handle the HA of the cluster.
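
For example, a minimal values sketch that enables Sentinel (key names match the newer chart versions quoted later in this thread; adjust for your chart version):

architecture: replication
replica:
  replicaCount: 3
sentinel:
  enabled: true

With Sentinel enabled, clients connect to the Sentinel port and ask it for the current master.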

@viceice
Contributor Author

viceice commented Mar 26, 2021

@rafariossaa what if the app doesn't support Sentinel, like in my case? I don't have control over that. I just want to deploy two apps in HA, and one of them needs Redis.

@carrodher
Member

Hi all,

After comparing both charts (DandyDeveloper/redis-ha, formerly stable/redis-ha) with the Bitnami ones, the main difference is that bitnami/redis (and/or bitnami/redis-cluster) doesn't add HAProxy support.

In the short term we don't have plans to add non-officially-supported ways of proxying requests or clustering to the Bitnami Redis Helm Chart, relying for this purpose on the alternatives provided by Redis itself. Adding HAProxy to the Bitnami Redis Helm Chart (templates, k8s manifests, container image, etc.) would increase its complexity. It is possible to use a regular k8s load balancer or Sentinel as a single entry point, and failover will continue to work properly.
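
As a sketch of that approach, a Sentinel-aware client (or a quick manual check) asks Sentinel for the current master; assuming the chart's default master set name mymaster and a release named redis:

redis-cli -h redis -p 26379 SENTINEL get-master-addr-by-name mymaster
# prints the IP and port of the current master, even after a failover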

[Image: deployment diagram (source linked in the original comment)]

On the other hand, we have a task on our backlog for the addition of HAProxy to the Bitnami Helm Charts catalog. Once this new Helm Chart is released (no ETA at this moment), we can evaluate adding the future bitnami/haproxy Helm Chart as a subchart in bitnami/redis (and/or bitnami/redis-cluster), or provide instructions to allow users to deploy and configure it along with Redis to cover those specific use cases.

@guru1306

guru1306 commented Jul 28, 2021

With this configuration, I am able to use the bitnami haproxy chart to redirect writes/reads to the master.
Here is a blog explaining the approach.

Posting this in case it helps someone.

replicaCount: 2
containerPorts:
  - name: http
    containerPort: 8080
  - name: redis
    containerPort: 6379
service:
  type: ClusterIP
  ports:
    - name: tcp-http
      protocol: TCP
      port: 80
      targetPort: http
    - name: tcp-redis
      protocol: TCP
      port: 6379
      targetPort: redis
configuration: |
    global
      log stdout format raw local0
      maxconn 1024   

    resolvers mydns
      parse-resolv-conf
      hold valid 10s

    defaults REDIS
      default-server init-addr libc,none
      log global
      mode tcp
      timeout connect 4s
      timeout server 330s
      timeout client 330s
      timeout check 2s
      option redispatch
      retries 300

    # decide redis backend to use
    #master
    frontend ft_redis_master
      bind *:6379
      use_backend bk_redis_master

    # Check all redis servers to see if they think they are master
    backend bk_redis_master
      mode tcp
      option tcp-check
      tcp-check connect
      tcp-check send PING\r\n
      tcp-check expect string +PONG
      tcp-check send info\ replication\r\n
      tcp-check expect string role:master
      tcp-check send QUIT\r\n
      tcp-check expect string +OK      
      server R0 demo-redis-node-0.demo-redis-headless.<your namespace>.svc.cluster.local:6379 resolvers mydns check inter 1s fall 1 rise 1      
      server R1 demo-redis-node-1.demo-redis-headless.<your namespace>.svc.cluster.local:6379 resolvers mydns check inter 1s fall 1 rise 1      
      server R2 demo-redis-node-2.demo-redis-headless.<your namespace>.svc.cluster.local:6379 resolvers mydns check inter 1s fall 1 rise 1
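
A quick way to verify the proxy always lands on the master (the service name demo-haproxy is an assumption matching the naming above):

redis-cli -h demo-haproxy -p 6379 info replication | grep role:
# expected: role:master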

@rafariossaa
Contributor

Hi,
Thanks for coming back and sharing your configuration.

@viceice
Contributor Author

viceice commented Aug 6, 2021

Here is my solution (uses Terraform)

haproxy values

replicaCount: 2

containerPorts:
  - name: redis
    containerPort: 6379

service:
  type: ClusterIP
  ports:
    - name: tcp-redis
      protocol: TCP
      port: 6379
      targetPort: redis

configuration: |
  resolvers mydns
    parse-resolv-conf
    resolve_retries     100
    hold other           5s
    hold refused         5s
    hold nx              5s
    hold timeout         5s
    hold valid           5s
    hold obsolete        5s

  defaults REDIS
    default-server init-addr libc,none
    mode tcp
    timeout connect 4s
    timeout server 15s
    timeout client 15s
    timeout check 2s
    option redispatch

  # decide redis backend to use
  #master
  frontend ft_redis_master
    bind *:6379
    use_backend bk_redis_master

  # Check all redis servers to see if they think they are master
  backend bk_redis_master
    mode tcp
    option tcp-check
    tcp-check connect
    tcp-check send "AUTH $${AUTH}"\r\n
    tcp-check expect string +OK
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
  %{ for idx in range(redis.replica) ~}
  server R${idx} ${redis.name}-node-${idx}.${redis.name}-headless:6379 resolvers mydns check inter 1s fall 1 rise 1
  %{ endfor ~}

extraEnvVars:
  - name: AUTH
    valueFrom:
      secretKeyRef:
        name: ${redis.secret}
        key: redis-password
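
Note: $${AUTH} in the tcp-check line is Terraform's escape for a literal ${AUTH}, which HAProxy expands at startup from the AUTH environment variable injected via extraEnvVars above.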

redis values

## redis (https://charts.bitnami.com/bitnami)
image:
  registry: gcr.io
  repository: bitnami-containers/redis

architecture: replication

master:
  persistence:
    enabled: false
#     #storageClass: longhorn
#   resources:
#     requests:
#       memory: 64Mi
#     limits:
#       memory: 256Mi

replica:
  persistence:
    enabled: false
    #storageClass: longhorn
  resources:
    requests:
      memory: 64Mi
    limits:
      memory: 256Mi

sentinel:
  enabled: true
  image:
    registry: gcr.io
    repository: bitnami-containers/redis-sentinel

terraform

resource "helm_release" "redis" {
  name      = "redis"
  namespace = local.ns

  repository = "https://charts.bitnami.com/bitnami"
  chart      = "redis"
  version    = "14.8.8"

  max_history = 10

  values = [
    file("${path.module}/redis/values.yaml")
  ]

  // update haproxy redisReplica
  set {
    name  = "replica.replicaCount"
    value = "3"
  }

  set {
    name  = "auth.existingSecret"
    value = kubernetes_secret.redis.metadata.0.name
  }

}

resource "kubernetes_secret" "redis" {
  metadata {
    name      = "redis-psw"
    namespace = local.ns
  }
  data = {
    "redis-password" = local.redis_password
  }
}
resource "helm_release" "haproxy" {
  name      = "haproxy"
  namespace = local.ns

  repository = "https://charts.bitnami.com/bitnami"
  chart      = "haproxy"
  version    = "0.2.2"

  max_history = 10

  values = [
    templatefile("${path.module}/haproxy/values.yaml", {
      redis = {
        replica = 3
        name    = helm_release.redis.name
        secret  = kubernetes_secret.redis.metadata.0.name
      }
    })
  ]
}

Edit: resolvers are needed to update IPs.
Edit: some small settings were added for better DNS timeouts.

@juan131
Contributor

juan131 commented Aug 9, 2021

Wow! Thanks so much @viceice!! That's awesome, I'm sure this will be extremely helpful for other users facing the challenge of exposing Redis.

@viceice
Contributor Author

viceice commented Aug 9, 2021

One bad issue: if I only update the configuration, the HAProxy pods need to be manually redeployed, as HAProxy doesn't recognize the config changes.

Any solution for that?

@juan131
Contributor

juan131 commented Aug 10, 2021

Yes @viceice, there's a solution for that, which consists of adding an annotation containing the checksum of the ConfigMap. See what we do in Redis:

Every time you modify the ConfigMap (via helm), the annotation changes, and that forces a rolling update of the HAProxy pods.
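
The usual Helm pattern looks roughly like this in the Deployment's pod template (the template path is illustrative):

spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}

Since the annotation value is derived from the rendered ConfigMap, any config change produces a new pod template hash and triggers a rollout.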

@viceice
Contributor Author

viceice commented Aug 10, 2021

OK, but that annotation is missing from the bitnami haproxy chart.

@juan131
Contributor

juan131 commented Aug 10, 2021

It looks like a perfect time to add it!! Sending a PR!!

@SCLogo

SCLogo commented Nov 3, 2021

When you use HAProxy, don't you get sporadic 500s? We use Redis with HAProxy behind Envoy ratelimit.

@zffocussss

zffocussss commented May 16, 2022

Is nginx/OpenResty an alternative to HAProxy, since nginx can be reloaded without downtime?

@carrodher
Member

Unfortunately, this issue was created a long time ago, and although there is an internal task to address it, it was not prioritized for the short/mid term. It's not a technical reason but a capacity one, since we're a small team.

That being said, contributions via PRs are more than welcome in both repositories (containers and charts), just in case you would like to contribute.

During this time, there have been several releases of this asset, and it's possible the issue has been resolved as part of other changes. If that's not the case and you are still experiencing this issue, please feel free to reopen it and we will re-evaluate it.

@bitnami-bot added this to Solved in Support Oct 20, 2022
@github-actions bot added the solved label and removed the on-hold label Oct 20, 2022
@github-actions bot moved this from Solved to Pending in Support Oct 20, 2022
@carrodher moved this from Pending to Solved in Support Oct 20, 2022
@viceice
Contributor Author

viceice commented Oct 20, 2022

This isn't fixed/done, but I no longer need it.

I switched to KeyDB, where this isn't required.

@fmulero removed this from Solved in Support Jan 18, 2023
@life5ign

life5ign commented Nov 3, 2023

The VMware/Oracle link is broken; here is the new one: https://blogs.oracle.com/cloud-infrastructure/post/deploying-highly-available-redis-replication-with-haproxy-on-oracle-cloud-infrastructure

@bitnami-bot added this to Triage in Support Nov 3, 2023
@github-actions bot added the triage label (Triage is needed) and removed the solved label Nov 3, 2023
@javsalgar changed the title from "[redis] haproxy frontend for redis/sentinel" to "[bitnami/redis] haproxy frontend for redis/sentinel" Nov 6, 2023
@javsalgar added the redis label Nov 6, 2023
@javsalgar moved this from Triage to Pending in Support Nov 6, 2023