How to edit the upstream nameservers used by CoreDNS or kube-dns, in a Rancher Kubernetes Engine (RKE) or Rancher v2.x provisioned Kubernetes cluster


Task

By default, the CoreDNS and kube-dns Pods inherit the nameserver configuration from the node on which they run. In some circumstances it may be desirable to override this and use a specific set of nameservers for external queries.

Note: These steps update the nameservers only for Pods that use either the ClusterFirst (default) or ClusterFirstWithHostNet DNS policy. Nameserver configuration for nodes and other Pods will not be affected.
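For reference, the DNS policy is set per Pod via the dnsPolicy field. A minimal illustrative Pod spec (the Pod name and image are placeholders):

```yaml
# Illustrative Pod spec: this Pod uses the default ClusterFirst policy,
# so its /etc/resolv.conf points at the cluster DNS service (CoreDNS or
# kube-dns) and is affected by the upstream nameserver configuration.
apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  dnsPolicy: ClusterFirst   # default; use ClusterFirstWithHostNet for hostNetwork Pods
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```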

Prerequisites

  • A Kubernetes cluster provisioned by the Rancher Kubernetes Engine (RKE) CLI or Rancher v2.x, with the CoreDNS or kube-dns DNS add-on enabled.

Note: The same configuration can also be applied when creating a new cluster.

Steps

Option A: Update the cluster.yaml

The cluster configuration YAML provides the upstreamnameservers option, to configure a list of upstream nameservers, per the example below:

  1. Add the upstreamnameservers option, with the list of nameservers, to the cluster configuration YAML. For RKE provisioned clusters, add this into the cluster.yml file. For a Rancher provisioned cluster, navigate to the cluster view in the Rancher UI, open the edit cluster view and click Edit as YAML.

    dns:
      provider: coredns
      upstreamnameservers:
      - 1.1.1.1
      - 8.8.8.8

  2. Update the cluster with the new configuration. For RKE provisioned clusters, invoke rke up --config cluster.yml (ensure the cluster.rkestate file is present in the working directory when invoking rke up). For Rancher provisioned clusters, click Save in the Rancher UI Edit as YAML view.

Note: This option is recommended as it requires minimal change; see the RKE add-ons documentation for more information.
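For context, CoreDNS forwards external queries according to the forward plugin in its Corefile. With the upstreamnameservers above, the rendered server block would look roughly like the following (an abbreviated, illustrative fragment; other plugins are omitted):

```
.:53 {
    # External queries are forwarded to the configured upstream nameservers
    forward . 1.1.1.1 8.8.8.8
    cache 30
}
```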

Option B: Update the kubelet resolv.conf

By default, the kubelet will refer to the /etc/resolv.conf file as the source for nameserver configuration.

It is possible to override this by adding an extra_args option to the kubelet service, and this is also accomplished in the cluster configuration YAML.

A custom resolv.conf file can then be used by the kubelet instead, per the example below:

  1. On each of the nodes in the cluster create the custom nameserver configuration file:

    echo "nameserver 8.8.8.8" > /etc/k8s-resolv.conf
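Since writing to /etc requires root privileges, one approach is to assemble and sanity-check the file in a temporary location first. A sketch, with illustrative nameserver addresses:

```shell
# Build the custom nameserver file in a temporary location and validate
# its format before installing it as /etc/k8s-resolv.conf (as root).
tmpconf=$(mktemp)
printf 'nameserver 8.8.8.8\nnameserver 1.1.1.1\n' > "$tmpconf"

# Every line must be a 'nameserver <address>' entry; print a verdict.
if grep -Evq '^nameserver [0-9A-Fa-f.:]+$' "$tmpconf"; then
  echo "invalid line found in $tmpconf"
else
  echo "format OK"
fi
```

Once validated, copy the file into place on each node, for example with sudo cp.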

  2. Add resolv-conf, referencing the custom nameserver configuration file, to the extra_args option for the kubelet service, in the cluster configuration YAML. For RKE provisioned clusters, add this into the cluster.yml file. For a Rancher provisioned cluster, navigate to the cluster view in the Rancher UI, open the edit cluster view and click Edit as YAML.

    services:
      kubelet:
        extra_args:
          resolv-conf: /host/etc/k8s-resolv.conf
  3. Update the cluster with the new configuration. For RKE provisioned clusters, invoke rke up --config cluster.yml (ensure the cluster.rkestate file is present in the working directory when invoking rke up). For Rancher provisioned clusters, click Save in the Rancher UI Edit as YAML view.

See the RKE services documentation for more information.

Note: Because kubelet flags are updated, the kubelet component will be restarted on each node.

Option C: Update the node resolv.conf

If the nameserver configuration should be consistent between the OS and Kubernetes Pods, updating the node /etc/resolv.conf file is recommended.

This could be because the nameservers are changing, or because a caching stub resolver (for example, systemd-resolved) is not desired.

How to change a systemd-managed resolv.conf depends on the Linux distribution; refer to the documentation for the distribution used in the cluster.
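As a quick check, whether systemd-resolved manages the file can usually be seen from whether /etc/resolv.conf is a symlink into /run/systemd/resolve. A sketch (exact behaviour varies by distribution):

```shell
# If /etc/resolv.conf is a symlink into /run/systemd/resolve, it is managed
# by systemd-resolved and should be changed via resolved.conf or resolvectl
# rather than edited directly.
if [ -L /etc/resolv.conf ]; then
  echo "managed: symlink -> $(readlink /etc/resolv.conf)"
else
  echo "unmanaged: regular file"
fi
```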

Note: The kubelet component reads the /etc/resolv.conf file only at startup, so after updating the file the kubelet must be restarted manually on each node.

This can be accomplished in a number of ways:

  • docker restart kubelet on each node
  • A drain and restart of each node
  • Replacing nodes in the cluster with the updated configuration
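The first option can be scripted from an administration host. A dry-run sketch, assuming SSH access to the nodes; the node names are hypothetical placeholders:

```shell
# Hypothetical inventory of cluster nodes; substitute your own hostnames.
NODES="node1 node2 node3"

for node in $NODES; do
  # Dry run: print the command that would restart the kubelet container.
  # Remove the leading 'echo' to execute it over SSH for real.
  echo ssh "$node" docker restart kubelet
done
```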