In a Kubernetes cluster running on AWS EC2 instances with the AWS cloud provider configured, a node's labels and taints are reset when its EC2 instance is rebooted.
- Kubernetes version lower than v1.12.0
- Cluster running on AWS EC2 instances with the AWS cloud provider configured
This behaviour is caused by the AWS cloud provider in Kubernetes versions prior to v1.12.0: when an EC2 instance is stopped, the corresponding node object is deleted from the Kubernetes cluster, and it is re-created when the instance is started again. Because the node object is deleted and re-created, any labels and taints applied to it are lost across the reboot. Details of the issue and the fix can be found in Kubernetes Pull Request #66835.
To resolve this issue, upgrade the cluster to Kubernetes v1.12.0 or later.
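As a quick way to tell whether a cluster is affected, the server version can be compared against v1.12.0. The sketch below is a minimal shell example; the `server_version` value, node name, label, and taint are hypothetical placeholders (in practice the version would come from `kubectl version`), and the commented `kubectl` commands illustrate re-applying labels and taints after a reboot as a stopgap until the upgrade.

```shell
# Returns success (0) if version $1 sorts lower than version $2.
version_lt() {
  [ "$1" != "$2" ] && \
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# Hypothetical value; in a real cluster, obtain it with something like:
#   kubectl version -o json   (serverVersion.gitVersion field)
server_version="v1.11.3"

if version_lt "${server_version#v}" "1.12.0"; then
  echo "affected: labels and taints will be lost on EC2 instance reboot"
  # Stopgap until the upgrade: re-apply labels and taints after each reboot.
  # Node name, label, and taint below are hypothetical examples:
  #   kubectl label node ip-10-0-0-1.ec2.internal role=worker --overwrite
  #   kubectl taint node ip-10-0-0-1.ec2.internal dedicated=gpu:NoSchedule --overwrite
else
  echo "not affected"
fi
```

With the placeholder version v1.11.3 the script reports the cluster as affected; once the cluster is upgraded to v1.12.0 or later, the check passes and no re-labeling is needed.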