[bitnami/redis] haproxy frontend for redis/sentinel #5424
Comments
Hi,
Does the service IP always point to the current redis master?
I assume yes 🙃

Thinking master-slave-with-sentinel describes something different. So I need a way to have a service always pointing to the master (auto failover), as my application is not sentinel aware. My idea was to use haproxy (known to work with sentinel) as a proxy. distribution/distribution#2885 distribution/distribution#2573

No, the master-slave mode is not a solution for me, as the master would be a single point of failure. My solution would be a service which always points to the sentinel-elected master node, so I thought fronting the redis nodes with haproxy could work. Maybe overriding the readiness probe is another solution: let it fail if the current node doesn't have the master role.
```yaml
slave:
  persistence:
    enabled: false
    #storageClass: longhorn
  requests:
    memory: 64Mi
  limits:
    memory: 256Mi
  shareProcessNamespace: true
  readinessProbe:
    enabled: false
  customReadinessProbe:
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 3
    exec:
      command:
        - 'bash'
        - '-c'
        - '/health/ping_readiness_local.sh 1 && export REDISCLI_AUTH=$(cat ${REDIS_PASSWORD_FILE}) && [ "$(timeout -s 3 1 redis-cli -h localhost info replication | grep -o "role:master")" == "role:master" ] || exit 1'
```
This did not work, because the helm deployment will time out 😕 so the only solution is to put haproxy in front.
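For reference, the check the probe above attempts is just "does `INFO replication` report `role:master`". A minimal self-contained sketch of that logic (the `is_master` helper is hypothetical, not part of the chart):

```python
def is_master(info_replication: str) -> bool:
    """Return True if an `INFO replication` payload reports role:master."""
    for line in info_replication.splitlines():
        if line.strip() == "role:master":
            return True
    return False

# The probe would feed this the output of `redis-cli info replication`.
print(is_master("# Replication\nrole:master\nconnected_slaves:2"))        # True
print(is_master("# Replication\nrole:slave\nmaster_host:redis-node-0"))   # False
```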
My workaround is to use the chart from DandyDeveloper/redis-ha. I like the bitnami charts 🙃

Hi,
values.yaml for redis-ha:
```yaml
auth: true
authKey: password
# not supported on single node cluster
hardAntiAffinity: false
image:
  repository: redis
  tag: 6.0.10-alpine
redis:
  resources:
    requests:
      memory: 64Mi
    limits:
      memory: 256Mi
sentinel:
  auth: true
  authKey: password
  resources: {}
haproxy:
  enabled: true
  hardAntiAffinity: false # see above
  image:
    repository: haproxy
    tag: 2.2.9-alpine
  resources: {}
persistentVolume:
  enabled: false
```
deploying with terraform:
```hcl
resource "helm_release" "redis" {
  name       = "redis"
  namespace  = local.ns
  repository = "https://dandydeveloper.github.io/charts"
  chart      = "redis-ha"
  version    = "4.12.1"

  values = [
    file("${path.module}/redis/values.yaml")
  ]

  set {
    name  = "existingSecret"
    value = kubernetes_secret.redis.metadata.0.name
  }

  set {
    name  = "sentinel.existingSecret"
    value = kubernetes_secret.redis.metadata.0.name
  }
}
```
Thanks for the info, we already created an internal task to check if there is any missing feature in the bitnami/redis-ha Helm Chart that can be added. I just added the
This solution is actually on VMware as well. Feel like you guys should know this already :p j/k

Hi @maxisam,
I was using the chart for a master/slave setup today, and was wondering: since there is no HA (the main instance is a single point of failure), what is the purpose of the slave replicas?

It is one of the possible ways of deploying this; if you need HA I would suggest adding sentinel to the mix. That element will handle the HA of the cluster.

@rafariossaa what if the app doesn't support sentinel? Like in my case, where I don't have control over that. I just want to deploy two apps in HA and one needs redis.
Hi all, after comparing both charts (DandyDeveloper/redis-ha, former stable/redis-ha):

In the short term we don't have plans to add non-officially-supported ways of proxying requests or clustering to the Bitnami Redis Helm Chart, relying for this purpose on the alternatives provided by Redis itself. The addition of HAProxy into the Bitnami Redis Helm Chart (templates, k8s manifests, container image, etc.) would increase its complexity. It is possible to use a regular k8s load balancer or sentinel as a single entry point, and failover will continue to work properly.

On the other hand, we have a task on our backlog to add HAProxy to the Bitnami Helm Charts catalog. Once that new Helm Chart is released (no ETA at this moment), we can evaluate the addition of the future
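Using sentinel as the single entry point works because any sentinel can be asked for the current master, e.g. `redis-cli -p 26379 SENTINEL get-master-address-by-name mymaster`. A small sketch of parsing that redis-cli reply (the `parse_master_addr` helper and the sample addresses are illustrative, not from the chart):

```python
def parse_master_addr(reply: str) -> tuple[str, int]:
    """Parse the two-line redis-cli reply to
    `SENTINEL get-master-address-by-name <name>` into (host, port).

    redis-cli prints something like:
        1) "10.42.0.5"
        2) "6379"
    """
    values = [
        line.split(")", 1)[1].strip().strip('"')
        for line in reply.strip().splitlines()
    ]
    host, port = values
    return host, int(port)

example = '1) "10.42.0.5"\n2) "6379"'
print(parse_master_addr(example))  # ('10.42.0.5', 6379)
```

A sentinel-aware client would redo this lookup on failover; that is exactly the step non-sentinel-aware apps cannot perform, which is why the thread keeps coming back to haproxy.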
With this configuration, I am able to use the bitnami haproxy chart to redirect writes/reads to the master. Posting this in case it helps someone.

Hi,
Here is my solution (uses terraform).

haproxy values:
```yaml
replicaCount: 2
containerPorts:
  - name: redis
    containerPort: 6379
service:
  type: ClusterIP
  ports:
    - name: tcp-redis
      protocol: TCP
      port: 6379
      targetPort: redis
configuration: |
  resolvers mydns
    parse-resolv-conf
    resolve_retries 100
    hold other    5s
    hold refused  5s
    hold nx       5s
    hold timeout  5s
    hold valid    5s
    hold obsolete 5s
  defaults REDIS
    default-server init-addr libc,none
    mode tcp
    timeout connect 4s
    timeout server 15s
    timeout client 15s
    timeout check 2s
    option redispatch
  # decide redis backend to use
  #master
  frontend ft_redis_master
    bind *:6379
    use_backend bk_redis_master
  # Check all redis servers to see if they think they are master
  backend bk_redis_master
    mode tcp
    option tcp-check
    tcp-check connect
    tcp-check send "AUTH $${AUTH}"\r\n
    tcp-check expect string +OK
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
  %{ for idx in range(redis.replica) ~}
    server R${idx} ${redis.name}-node-${idx}.${redis.name}-headless:6379 resolvers mydns check inter 1s fall 1 rise 1
  %{ endfor ~}
extraEnvVars:
  - name: AUTH
    valueFrom:
      secretKeyRef:
        name: ${redis.secret}
        key: redis-password
```
redis values:
```yaml
## redis (https://charts.bitnami.com/bitnami)
image:
  registry: gcr.io
  repository: bitnami-containers/redis
architecture: replication
master:
  persistence:
    enabled: false
  # #storageClass: longhorn
  # resources:
  #   requests:
  #     memory: 64Mi
  #   limits:
  #     memory: 256Mi
replica:
  persistence:
    enabled: false
    #storageClass: longhorn
  resources:
    requests:
      memory: 64Mi
    limits:
      memory: 256Mi
sentinel:
  enabled: true
  image:
    registry: gcr.io
    repository: bitnami-containers/redis-sentinel
```
terraform:
```hcl
resource "helm_release" "redis" {
  name        = "redis"
  namespace   = local.ns
  repository  = "https://charts.bitnami.com/bitnami"
  chart       = "redis"
  version     = "14.8.8"
  max_history = 10

  values = [
    file("${path.module}/redis/values.yaml")
  ]

  // update haproxy redisReplica
  set {
    name  = "replica.replicaCount"
    value = "3"
  }

  set {
    name  = "auth.existingSecret"
    value = kubernetes_secret.redis.metadata.0.name
  }
}

resource "kubernetes_secret" "redis" {
  metadata {
    name      = "redis-psw"
    namespace = local.ns
  }

  data = {
    "redis-password" = local.redis_password
  }
}

resource "helm_release" "haproxy" {
  name        = "haproxy"
  namespace   = local.ns
  repository  = "https://charts.bitnami.com/bitnami"
  chart       = "haproxy"
  version     = "0.2.2"
  max_history = 10

  values = [
    templatefile("${path.module}/haproxy/values.yaml", {
      redis = {
        replica = 3
        name    = helm_release.redis.name
        secret  = kubernetes_secret.redis.metadata.0.name
      }
    })
  ]
}
```
Edit: the resolvers are needed so haproxy picks up IP changes.
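The `%{ for … }` block in the haproxy values is Terraform `templatefile` string-template syntax: it emits one `server` line per redis replica. A rough Python equivalent (illustrative only) of what the rendered config contains:

```python
def server_lines(name: str, replicas: int) -> list[str]:
    """Mimic the terraform templatefile loop that emits haproxy server lines."""
    return [
        f"server R{i} {name}-node-{i}.{name}-headless:6379 "
        f"resolvers mydns check inter 1s fall 1 rise 1"
        for i in range(replicas)
    ]

# With replica = 3 and release name "redis":
for line in server_lines("redis", 3):
    print(line)
```

Because the backend's `tcp-check` sequence only accepts a server that answers `role:master`, exactly one of these servers is "up" at any time, which is what makes the single frontend port follow the failover.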
Wow! Thanks so much @viceice!! That's awesome, I'm sure that will be extremely helpful for other users facing the challenge of exposing Redis.

One bad issue: if I only update the haproxy configuration, the change is not picked up. Any solution for that?
Yes @viceice, there's a solution for that, which consists of adding an annotation that contains the checksum of the ConfigMap. See what we do in Redis: every time you modify the ConfigMap (via helm), the annotation changes, and that forces a rolling update of the HAProxy pods.

Ok, but that annotation is missing from the bitnami haproxy chart.
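The pattern described here is the standard Helm checksum-annotation trick; a sketch of what such a line in the chart's deployment template looks like (the template path is illustrative):

```yaml
# In the HAProxy deployment template (sketch):
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```

When the rendered ConfigMap changes, the checksum (and thus the pod template) changes, so Kubernetes performs a rolling update.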
It looks like a perfect time to add it!! Sending a PR!! |
When you use haproxy, don't you get sporadic 500s? We use redis with haproxy behind envoy ratelimit.

Is nginx/openresty an alternative to haproxy? nginx can be reloaded without downtime.
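nginx can proxy raw TCP via its stream module, but the open-source build has no equivalent of haproxy's `tcp-check send/expect` sequence, so it cannot probe on its own which node is master. A minimal TCP pass-through sketch (illustrative only; the upstream address is an assumption):

```nginx
stream {
    upstream redis_master {
        # A fixed node; nginx OSS cannot health-check for role:master itself
        server redis-node-0.redis-headless:6379;
    }
    server {
        listen 6379;
        proxy_pass redis_master;
    }
}
```

Master detection would need OpenResty Lua scripting or an external process rewriting the upstream, so it is not a drop-in replacement for the haproxy setup above.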
Unfortunately, this issue was created a long time ago and, although there is an internal task to fix it, it was not prioritized as something to address in the short/mid term. It's not a technical reason but something related to capacity, since we're a small team. That being said, contributions via PRs are more than welcome in both repositories (containers and charts), just in case you would like to contribute. During this time there have been several releases of this asset and it's possible the issue has gone away as part of other changes. If that's not the case and you are still experiencing this issue, please feel free to reopen it and we will re-evaluate it.

This isn't fixed/done, but I no longer need it: I switched to KeyDB, where this isn't required.

The vmware/oracle link is broken; here is the new one: https://blogs.oracle.com/cloud-infrastructure/post/deploying-highly-available-redis-replication-with-haproxy-on-oracle-cloud-infrastructure
Which chart:
redis v12.7.4
Is your feature request related to a problem? Please describe.
There is a lot of software that can use redis as a cache but is not able to use sentinel, so it would be nice if I could enable haproxy as a frontend, so that all non-sentinel-aware software (like docker registry) can use an HA redis.
Describe the solution you'd like
Add a flag to enable an haproxy deployment and service as a frontend to the redis master, with automatic failover.
Describe alternatives you've considered
Manually configure haproxy.
Additional context
Configure haproxy for redis with sentinel: