Node labels and taints reset on reboot with the AWS cloudprovider in Kubernetes versions lower than v1.12.0

Issue

In a Kubernetes cluster running on AWS EC2 instances with the AWS cloudprovider configured, labels and taints applied to a node are reset when the underlying EC2 instance is rebooted.
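
As an illustration, the symptom can be reproduced by applying a label and a taint to a node, rebooting its EC2 instance, and inspecting the node again once it re-registers. The node name ip-10-0-0-1.ec2.internal and the label/taint values below are placeholders:

    # Apply a label and a taint (placeholder node name and key/values)
    kubectl label node ip-10-0-0-1.ec2.internal environment=production
    kubectl taint nodes ip-10-0-0-1.ec2.internal dedicated=backend:NoSchedule

    # Reboot the EC2 instance, wait for the node to become Ready again,
    # then re-inspect it; on affected versions the label and taint are gone
    kubectl get node ip-10-0-0-1.ec2.internal --show-labels
    kubectl describe node ip-10-0-0-1.ec2.internal | grep -i taints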

Prerequisites

  • Kubernetes version lower than v1.12.0
  • Cluster running on AWS EC2 instances, with the AWS cloudprovider configured

Root Cause

This behaviour is caused by the AWS cloudprovider in Kubernetes versions prior to v1.12.0, in which the Node object for a stopped EC2 instance is deleted from the Kubernetes cluster and then re-created when the instance is started again. As a result of this deletion and re-creation, any labels and taints on the node are lost across the reboot. Details of the issue and its fix can be found in Kubernetes Pull Request #66835.
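
One way to confirm that the Node object was deleted and re-created, rather than simply marked NotReady, is to compare the object's UID and creationTimestamp before and after the reboot; both change when the object is re-created. Again, the node name is a placeholder:

    # Record the Node object's UID and creation time before the reboot
    kubectl get node ip-10-0-0-1.ec2.internal -o jsonpath='{.metadata.uid} {.metadata.creationTimestamp}'

    # Run the same command after the instance has rebooted and re-registered;
    # a different UID and a newer creationTimestamp show the object was re-created
    kubectl get node ip-10-0-0-1.ec2.internal -o jsonpath='{.metadata.uid} {.metadata.creationTimestamp}'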

Resolution

To resolve this issue, the cluster should be upgraded to Kubernetes v1.12.0 or above.

For clusters provisioned via the RKE CLI, users can upgrade the cluster to Kubernetes v1.12.0 or higher with RKE v0.1.10 or above.
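
As a minimal sketch of an RKE CLI upgrade, assuming an existing cluster.yml for the cluster, set kubernetes_version to a supported v1.12.x image tag and re-run rke up. The exact tag below is illustrative; the versions available depend on the installed RKE release:

    # cluster.yml (excerpt)
    kubernetes_version: "v1.12.7-rancher1-1"

    # Apply the upgrade with the updated cluster.yml
    rke up --config cluster.yml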

For clusters provisioned via Rancher, users can upgrade the cluster to Kubernetes v1.12.6 or higher with Rancher v2.1.7 or above.
