How to migrate from CentOS packaged to upstream Docker


Task

This article describes how to migrate a CentOS node in a Rancher cluster from the Docker package shipped in the CentOS repositories to the upstream package provided by Docker.

To perform this migration you must first uninstall the CentOS-packaged Docker before installing the upstream version. This process is destructive and removes all container state from the host. As a result, the process outlined below guides you through removing the node from the Rancher cluster, performing the package migration, and finally re-adding the node to the cluster.

Prerequisites

  • A Kubernetes cluster launched with the Rancher Kubernetes Engine (RKE) CLI, v0.1.x or v0.2.x, or a Rancher v2.x launched Kubernetes cluster on custom nodes
  • Nodes running CentOS 7.x, with Docker installed from the CentOS extras repository.

Resolution

Cluster launched by the RKE CLI

Create a Backup

As with any cluster maintenance, it is recommended that you first take an etcd snapshot of the cluster to recover from in the event of an issue. A snapshot can be created for the cluster per the RKE documentation here, and you should copy the snapshot off an etcd node to a safe location outside the cluster.
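The snapshot step can be sketched as follows, assuming your cluster configuration file is cluster.yml. The snapshot name, node address, and destination path are illustrative; the default snapshot directory and archive format may vary with your RKE version, so verify them against the RKE documentation:

```shell
# Take a one-off etcd snapshot for the cluster defined in cluster.yml.
# "pre-docker-migration" is an example snapshot name.
rke etcd snapshot-save --config cluster.yml --name pre-docker-migration

# Copy the snapshot off an etcd node to a safe location outside the cluster.
# Replace the user, node address, and paths with your own values;
# /opt/rke/etcd-snapshots is the default RKE snapshot directory.
scp user@etcd-node:/opt/rke/etcd-snapshots/pre-docker-migration.zip /local/backups/
```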

Perform migration on each cluster node in turn

  1. Determine whether you should first add an additional node to the cluster to replace the node during its migration:

    Controlplane or etcd nodes: If the node is a controlplane or etcd node, it is recommended that you first add an additional node to replace it, or add the role(s) to an existing node, so that quorum is maintained should another node fail during the process. If the node is the sole etcd or controlplane node in the cluster, adding a replacement node is not optional. Add the new etcd and/or controlplane role node to the cluster configuration YAML and run rke up to provision it.

    Worker nodes: If the worker nodes within the cluster are heavily loaded, or if the node is the sole worker role node, you should provision an additional worker node to replace the node during the migration. Add the new worker role node to the cluster configuration YAML and run rke up to provision it.

  2. Remove the node that you are migrating from the cluster: delete it from the cluster configuration YAML and then run rke up to reconcile the cluster.

  3. Once the rke up invocation in step 2 completes successfully, run the Extended Rancher 2 cleanup script on the node that you are migrating, to clean up Rancher state.
  4. Switch to the upstream Docker package on the node by following the Docker Engine installation documentation for CentOS, starting from the section Uninstall old versions here.
  5. Add the node back to the cluster configuration YAML and run rke up to provision it.
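For reference, step 4 broadly corresponds to the following commands, based on the Docker Engine installation documentation for CentOS. The package list and repository URL can change over time, so always confirm them against the linked documentation before running anything:

```shell
# Remove the CentOS-packaged Docker and related components.
sudo yum remove docker \
                docker-client \
                docker-client-latest \
                docker-common \
                docker-latest \
                docker-latest-logrotate \
                docker-logrotate \
                docker-engine

# Add the upstream Docker repository and install Docker Engine.
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io

# Start Docker and enable it at boot.
sudo systemctl enable --now docker
```

You may wish to pin a specific docker-ce version that is validated for your Kubernetes version, for example with sudo yum install docker-ce-&lt;VERSION&gt;; check the Rancher support matrix for the versions applicable to your cluster.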

Custom cluster launched by Rancher

Create a Backup

As with any cluster maintenance, it is recommended that you first take an etcd snapshot of the cluster to recover from in the event of an issue. A snapshot can be created for the cluster per the Rancher documentation here, and, if S3 backups are not configured for the cluster, you should copy the snapshot off an etcd node to a safe location outside the cluster.

Perform migration on each cluster node in turn

  1. Determine whether you should first add an additional node to the cluster to replace the node during its migration:

    Controlplane or etcd nodes: If the node is a controlplane or etcd node, it is recommended that you first add an additional node to replace it, so that quorum is maintained should another node fail during the process. If the node is the sole etcd or controlplane node in the cluster, adding a replacement node is not optional. Add the new etcd and/or controlplane role node by running the Rancher agent command from the 'Edit Cluster' view, with the appropriate roles, on the replacement node.

    Worker nodes: If the worker nodes within the cluster are heavily loaded, or if the node is the sole worker role node, you should provision an additional worker node to replace the node during the migration. Add the new worker role node by running the Rancher agent command from the 'Edit Cluster' view, with the worker role, on the replacement node.

  2. Remove the node that you are migrating from the cluster: delete it from the node list for the cluster within Rancher.

  3. Once the cluster reconciliation triggered by step 2 is complete, and the cluster no longer shows as updating within Rancher, run the Extended Rancher 2 cleanup script on the node that you are migrating, to clean up Rancher state.
  4. Switch to the upstream Docker package on the node by following the Docker Engine installation documentation for CentOS, starting from the section Uninstall old versions here.
  5. Add the node back by running the Rancher agent command from the 'Edit Cluster' view, with the appropriate roles, on the node.
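The agent command in steps 1 and 5 takes the general form below, shown here with all three roles. Everything in angle brackets is a placeholder: the image tag, server URL, token, and CA checksum are specific to your installation, so always copy the exact command generated by your own Rancher 'Edit Cluster' view, and include only the role flags appropriate for the node:

```shell
# Illustrative Rancher v2.x agent registration command; copy the real one
# from the 'Edit Cluster' view rather than assembling it by hand.
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:<TAG> \
  --server https://<RANCHER_SERVER_URL> \
  --token <TOKEN> \
  --ca-checksum <CHECKSUM> \
  --etcd --controlplane --worker
```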