In Rancher v2.x prior to v2.3, if a node is deleted via Kubernetes rather than through Rancher itself, for example via kubectl delete node or another process connecting to the Kubernetes API such as the cluster-autoscaler, the node is removed from the Kubernetes cluster but remains present in Rancher in an 'unavailable' state.
Kubernetes scheduling and workloads behave as expected following the removal, as the node is correctly deleted from the Kubernetes cluster. However, Rancher will continue to show the node as 'unavailable' until it is manually deleted from within Rancher as well.
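The behaviour described above can be reproduced with kubectl against an affected cluster; the node name used here is a placeholder for illustration only.

```shell
# Delete a node directly via the Kubernetes API, bypassing Rancher
# ("worker-1" is a placeholder node name)
kubectl delete node worker-1

# The node no longer appears in the Kubernetes cluster's node list
kubectl get nodes

# In Rancher prior to v2.3, the same node will still appear in the
# cluster's node list in the Rancher UI, marked as 'unavailable'.
```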
- A Kubernetes cluster provisioned by Rancher v2.x, prior to v2.3, using either custom nodes or nodes hosted with an infrastructure provider.
To remove nodes in a Rancher v2.x provisioned cluster that have been deleted in Kubernetes, and are no longer present in the output of
kubectl get nodes but remain in Rancher in an 'unavailable' state, delete them from the cluster's node list within the Rancher UI.
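As an alternative to the UI, stale node objects can typically be removed via Rancher's v3 API. This is a hedged sketch: the server URL, bearer token, cluster ID, and node ID below are all placeholders, and the exact endpoint and ID format should be confirmed against the API documentation for your Rancher version.

```shell
# List node objects Rancher holds for a given cluster
# (URL, token, and "c-abcde" cluster ID are placeholders)
curl -s -u "token-xxxxx:secret" \
  "https://rancher.example.com/v3/nodes?clusterId=c-abcde"

# Delete the stale node object so it no longer appears in Rancher
# ("c-abcde:m-12345" is a placeholder node ID from the listing above)
curl -s -u "token-xxxxx:secret" -X DELETE \
  "https://rancher.example.com/v3/nodes/c-abcde:m-12345"
```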
This was tracked in Rancher GitHub issue #14184 and has been resolved as of the Rancher v2.3 release. In Rancher v2.3 and above, when a node is deleted via Kubernetes, Rancher detects the deletion and reconciles the cluster to reflect the removal.