Network ingress traffic from the Calico IP pool range always SNAT'd in Kubernetes clusters with the canal network provider

Network ingress traffic to a Kubernetes cluster using the canal network provider is always SNAT'd when the source IP address falls within the default Calico IP pool range, even in cases where SNAT is not desired.

For example, on NodePort services configured with externalTrafficPolicy: Local, the client source IP should be preserved without SNAT, per the Kubernetes documentation. With this issue, the source IP is SNAT'd even for NodePort services configured with externalTrafficPolicy: Local.
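For reference, the following is a minimal sketch of a NodePort Service with externalTrafficPolicy: Local; the service name, namespace, selector and port numbers are hypothetical placeholders, not values taken from an affected cluster.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport        # hypothetical name
  namespace: default
spec:
  type: NodePort
  externalTrafficPolicy: Local  # client source IP should be preserved (no SNAT expected)
  selector:
    app: example                # hypothetical pod selector
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080
EOF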


Requirements

  • A Kubernetes cluster provisioned via the RKE CLI or Rancher, using the canal network provider

Root cause

When a cluster is provisioned with the canal network provider, Flannel provides pod networking and Calico is used only for network policy enforcement; IP address management is therefore handled by Flannel, not Calico.

The calico-node container in the canal pod is nonetheless configured with a default (unused) Calico IP pool. By default, Calico programs iptables rules in the cali-nat-outgoing chain of the nat table on cluster nodes to SNAT traffic originating from this IP pool. The purpose of these rules is to masquerade egress traffic from pods in clusters where Calico provides networking (and not just network policy). As a result, in a cluster using the canal network provider, where the calico-node container is present only for network policy enforcement, these rules are still programmed, and any ingress traffic with a source IP inside the unused pool's range matches them and is SNAT'd.
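To confirm whether the rules are present, the nat table can be inspected directly on a cluster node. This is a sketch that assumes the default Calico chain names; run it as root on an affected node.

# List the Calico outgoing-NAT rules; a MASQUERADE rule here indicates that
# traffic sourced from the Calico IP pool will be SNAT'd
iptables -t nat -S cali-nat-outgoing

# The chain is reached from POSTROUTING via Calico's own chains
iptables -t nat -L POSTROUTING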


Resolution

The permanent solution is to update the RKE deployment templates for the canal daemonset to set the environment variable CALICO_IPV4POOL_NAT_OUTGOING to 0 on the calico-node container. This prevents the problematic cali-nat-outgoing iptables rules from being programmed, and is tracked in Rancher Issue #20500.
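For illustration only, the intended container configuration can also be applied to an existing cluster by setting the variable directly on the canal daemonset. This is a sketch: it assumes the daemonset is named canal in the kube-system namespace, and a change made this way may be reverted the next time RKE or Rancher reconciles the cluster.

# Set CALICO_IPV4POOL_NAT_OUTGOING=0 on the calico-node container only
kubectl -n kube-system set env daemonset/canal -c calico-node CALICO_IPV4POOL_NAT_OUTGOING=0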

Workaround

To work around the issue in existing clusters, the Calico IPPool configuration can be edited to disable outgoing NAT, which removes the cali-nat-outgoing iptables rules. To implement this workaround, run kubectl against the affected cluster to edit the default-ipv4-ippool object: kubectl edit ippools default-ipv4-ippool. Change the line natOutgoing: true to natOutgoing: false and save. Calico will detect the configuration update and remove the cali-nat-outgoing iptables rules.
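As a non-interactive alternative to kubectl edit, the same change can be applied with kubectl patch. This is a sketch, assuming the IPPool is exposed through the Kubernetes API as in the command above.

# Disable outgoing NAT on the default Calico IP pool
kubectl patch ippool default-ipv4-ippool --type=merge -p '{"spec":{"natOutgoing":false}}'

# Afterwards, the cali-nat-outgoing chain on cluster nodes should no longer
# contain a MASQUERADE rule for the pool
iptables -t nat -S cali-nat-outgoing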
