Unable to access SCTP Server using NodePort from host when running cilium without kube-proxy #32412
Labels: area/loadbalancing, kind/community-report, kind/question, needs/triage, sig/datapath
What happened?
In a kubeadm-based k8s cluster, I am running Cilium as the primary CNI without kube-proxy. I deployed the sample sctp-server as a Deployment and exposed it via a NodePort service, just as described in https://isovalent.com/labs/cilium-sctp/.
When I try to reach the SCTP server from the host using ncat with the NodeIP and NodePort, I get connection refused. It doesn't work with the service IP either. It does work when I run ncat from inside another pod, targeting the SCTP service name or pod IP.
I also captured a pcap and see INIT/ABORT messages for the SCTP protocol, where the Verification tag in the ABORT message is non-zero.
When I repeat the same experiment with the same Cilium, k8s, and kernel versions but with kube-proxy enabled, the SCTP connection from the host via NodeIP and NodePort is established. Can you please let me know what I am missing here? Is SCTP access via NodePort not yet supported without kube-proxy, or do I need to enable some other config?
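For reference, the NodePort service looks roughly like the sketch below. The names, port numbers, and labels here are placeholders, not the exact manifest from the lab; the relevant part is `protocol: SCTP` on the service port.

```yaml
# Hypothetical sketch of the NodePort service for the sctp-server.
# Names/ports are illustrative; only "protocol: SCTP" is the essential detail.
apiVersion: v1
kind: Service
metadata:
  name: sctp-server
spec:
  type: NodePort
  selector:
    app: sctp-server
  ports:
    - protocol: SCTP
      port: 9999        # service port (illustrative)
      targetPort: 9999  # container port (illustrative)
```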
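Concretely, the tests I ran look like the following (NODE_IP, NODE_PORT, and SERVICE_IP are placeholders for my cluster's actual values):

```shell
# From the host: fails with connection refused (kube-proxy disabled)
ncat --sctp NODE_IP NODE_PORT
ncat --sctp SERVICE_IP 9999

# From inside another pod: works
kubectl exec -it some-client-pod -- ncat --sctp sctp-server 9999
```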
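The capture was taken along these lines (interface name is a placeholder for my node's actual interface):

```shell
# Capture SCTP traffic on the node while running ncat from the host,
# then inspect the INIT/ABORT chunks in Wireshark or tshark.
tcpdump -i any sctp -w sctp.pcap
```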
cilium-config.txt
Cilium Version
cilium-cli: v0.16.6 compiled with go1.22.2 on linux/amd64
cilium image (default): v1.15.4
Kernel Version
5.4.0-174-generic
Kubernetes Version
Client Version: v1.29.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.3
Regression
No response
Sysdump
No response
Relevant log output
No response
Anything else?
No response