Merge pull request #3215 from catherineluse/priorityclass

Document priority class name options for RKE addons
This commit is contained in:
Catherine Luse
2021-04-23 11:46:50 -07:00
committed by GitHub
5 changed files with 197 additions and 49 deletions
@@ -18,7 +18,12 @@ There are a few things worth noting:
* As of v0.1.8, RKE will update an add-on if it has the same name.
* Before v0.1.8, update any add-ons by using `kubectl edit`.
## Critical and Non-Critical Add-ons
- [Critical and Non-Critical Add-ons](#critical-and-non-critical-add-ons)
- [Add-on Deployment Jobs](#add-on-deployment-jobs)
- [Add-on Placement](#add-on-placement)
- [Tolerations](#tolerations)
# Critical and Non-Critical Add-ons
As of version v0.1.7, add-ons are split into two categories:
@@ -27,7 +32,7 @@ As of version v0.1.7, add-ons are split into two categories:
Currently, only the [network plug-in]({{<baseurl>}}/rke/latest/en/config-options/add-ons/network-plugins/) is considered critical. KubeDNS, [ingress controllers]({{<baseurl>}}/rke/latest/en/config-options/add-ons/ingress-controllers/) and [user-defined add-ons]({{<baseurl>}}/rke/latest/en/config-options/add-ons/user-defined-add-ons/) are considered non-critical.
## Add-on deployment jobs
# Add-on Deployment Jobs
RKE uses Kubernetes jobs to deploy add-ons. In some cases, add-on deployment takes longer than expected. As of version v0.1.7, RKE provides an option to control the job check timeout in seconds. This timeout is set at the cluster level.
@@ -35,7 +40,7 @@ RKE uses Kubernetes jobs to deploy add-ons. In some cases, add-ons deployment ta
addon_job_timeout: 30
```
## Add-on placement
# Add-on Placement
_Applies to v0.2.3 and higher_
@@ -50,7 +55,7 @@ _Applies to v0.2.3 and higher_
| nginx-ingress | - `beta.kubernetes.io/os:NotIn:windows`<br/>- `node-role.kubernetes.io/worker` `Exists` | none | - `NoSchedule:Exists`<br/>- `NoExecute:Exists` |
| metrics-server | - `beta.kubernetes.io/os:NotIn:windows`<br/>- `node-role.kubernetes.io/worker` `Exists` | none | - `NoSchedule:Exists`<br/>- `NoExecute:Exists` |
## Tolerations
# Tolerations
_Available as of v1.2.4_
@@ -3,6 +3,25 @@ title: DNS providers
weight: 262
---
- [Available DNS Providers](#available-dns-providers)
- [Disabling deployment of a DNS Provider](#disabling-deployment-of-a-dns-provider)
- [CoreDNS](#coredns)
- [Scheduling CoreDNS](#scheduling-coredns)
- [Upstream nameservers](#coredns-upstream-nameservers)
- [Priority Class Name](#coredns-priority-class-name)
- [Tolerations](#coredns-tolerations)
- [kube-dns](#kube-dns)
- [Scheduling kube-dns](#scheduling-kube-dns)
- [Upstream nameservers](#kube-dns-upstream-nameservers)
- [Priority Class Name](#kube-dns-priority-class-name)
- [Tolerations](#kube-dns-tolerations)
- [NodeLocal DNS](#nodelocal-dns)
- [Configuring NodeLocal DNS](#configuring-nodelocal-dns)
- [Priority Class Name](#nodelocal-priority-class-name)
- [Removing NodeLocal DNS](#removing-nodelocal-dns)
# Available DNS Providers
RKE provides the following DNS providers that can be deployed as add-ons:
* [CoreDNS](https://coredns.io)
@@ -18,6 +37,17 @@ CoreDNS was made the default in RKE v0.2.5 when using Kubernetes 1.14 and higher
> **Note:** If you switch from one DNS provider to another, the existing DNS provider will be removed before the new one is deployed.
# Disabling Deployment of a DNS Provider
_Available as of v0.2.0_
You can disable the default DNS provider by setting the dns `provider` directive to `none` in the cluster configuration. Be aware that this will prevent your pods from performing name resolution in your cluster.
```yaml
dns:
provider: none
```
# CoreDNS
_Available as of v0.2.5_
@@ -28,7 +58,7 @@ RKE will deploy CoreDNS as a Deployment with the default replica count of 1. The
The images used for CoreDNS are under the [`system_images` directive]({{<baseurl>}}/rke/latest/en/config-options/system-images/). For each Kubernetes version, there are default images associated with CoreDNS, but these can be overridden by changing the image tag in `system_images`.
## Scheduling CoreDNS
### Scheduling CoreDNS
If you only want the CoreDNS pod to be deployed on specific nodes, you can set a `node_selector` in the `dns` section. The label in the `node_selector` would need to match the label on the nodes for the CoreDNS pod to be deployed.
@@ -46,9 +76,8 @@ dns:
app: dns
```
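For reference, a complete `dns` section with a node selector might look like the following sketch (the `app: dns` label is only an example and must match a label actually applied to your nodes):

```yaml
dns:
  provider: coredns
  # CoreDNS pods are only scheduled on nodes carrying this label
  node_selector:
    app: dns
```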
## Configuring CoreDNS
### Upstream nameservers
### CoreDNS Upstream nameservers
By default, CoreDNS will use the host-configured nameservers (usually defined in `/etc/resolv.conf`) to resolve external queries. If you want to configure specific upstream nameservers to be used by CoreDNS, you can use the `upstreamnameservers` directive.
@@ -62,13 +91,28 @@ dns:
- 8.8.4.4
```
### Tolerations
### CoreDNS Priority Class Name
_Available as of RKE v1.2.6+_
The [pod priority](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#pod-priority) is set by configuring a priority class name under `options`:
```yaml
dns:
options:
coredns_autoscaler_priority_class_name: system-cluster-critical
coredns_priority_class_name: system-cluster-critical
provider: coredns
```
### CoreDNS Tolerations
_Available as of v1.2.4_
The configured tolerations apply to both the `coredns` and the `coredns-autoscaler` Deployments.
```
```yaml
dns:
provider: coredns
tolerations:
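# Example toleration entry (a sketch; the values are illustrative and follow
# the standard Kubernetes toleration schema):
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300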
@@ -95,7 +139,7 @@ RKE will deploy kube-dns as a Deployment with the default replica count of 1. Th
The images used for kube-dns are under the [`system_images` directive]({{<baseurl>}}/rke/latest/en/config-options/system-images/). For each Kubernetes version, there are default images associated with kube-dns, but these can be overridden by changing the image tag in `system_images`.
## Scheduling kube-dns
### Scheduling kube-dns
_Available as of v0.2.0_
@@ -115,9 +159,7 @@ dns:
app: dns
```
## Configuring kube-dns
### Upstream nameservers
### kube-dns Upstream nameservers
_Available as of v0.2.0_
@@ -133,13 +175,28 @@ dns:
- 8.8.4.4
```
### Tolerations
### kube-dns Priority Class Name
_Available as of RKE v1.2.6+_
The [pod priority](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#pod-priority) is set by configuring a priority class name under `options`:
```yaml
dns:
options:
kube_dns_autoscaler_priority_class_name: system-cluster-critical
kube_dns_priority_class_name: system-cluster-critical
provider: kube-dns
```
### kube-dns Tolerations
_Available as of v1.2.4_
The configured tolerations apply to both the `kube-dns` and the `kube-dns-autoscaler` Deployments.
```
```yaml
dns:
provider: kube-dns
tolerations:
@@ -161,16 +218,7 @@ kubectl get deploy kube-dns -n kube-system -o jsonpath='{.spec.template.spec.tol
kubectl get deploy kube-dns-autoscaler -n kube-system -o jsonpath='{.spec.template.spec.tolerations}'
```
# Disabling deployment of a DNS provider
_Available as of v0.2.0_
You can disable the default DNS provider by setting the dns `provider` directive to `none` in the cluster configuration. Be aware that this will prevent your pods from performing name resolution in your cluster.
```yaml
dns:
provider: none
```
# NodeLocal DNS
@@ -186,7 +234,7 @@ NodeLocal DNS is an additional component that can be deployed on each node to im
Enable NodeLocal DNS by configuring an IP address.
## Configuring NodeLocal DNS
### Configuring NodeLocal DNS
The `ip_address` parameter configures the link-local IP address that NodeLocal DNS will listen on for each host. Make sure this IP address is not already configured on the host.
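A minimal sketch, assuming the commonly used link-local address `169.254.20.10` (any unused link-local address works):

```yaml
dns:
  provider: coredns
  nodelocal:
    ip_address: "169.254.20.10"  # example link-local address; must be free on every host
```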
@@ -199,7 +247,21 @@ dns:
> **Note:** When enabling NodeLocal DNS on an existing cluster, pods that are currently running will not be modified; the updated `/etc/resolv.conf` configuration takes effect only for pods started after NodeLocal DNS is enabled.
## Removing NodeLocal DNS
### NodeLocal Priority Class Name
_Available as of RKE v1.2.6+_
The [pod priority](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#pod-priority) is set by configuring a priority class name under `options`:
```yaml
dns:
options:
nodelocal_autoscaler_priority_class_name: system-cluster-critical
nodelocal_priority_class_name: system-cluster-critical
provider: coredns # a DNS provider must be configured
```
### Removing NodeLocal DNS
Removing the `ip_address` value removes NodeLocal DNS from the cluster.
@@ -4,6 +4,17 @@ description: By default, RKE deploys the NGINX ingress controller. Learn how to
weight: 262
---
- [Default Ingress](#default-ingress)
- [Scheduling Ingress Controllers](#scheduling-ingress-controllers)
- [Ingress Priority Class Name](#ingress-priority-class-name)
- [Tolerations](#tolerations)
- [Disabling the Default Ingress Controller](#disabling-the-default-ingress-controller)
- [Configuring NGINX Ingress Controller](#configuring-nginx-ingress-controller)
- [Disabling NGINX Ingress Default Backend](#disabling-nginx-ingress-default-backend)
- [Configuring an NGINX Default Certificate](#configuring-an-nginx-default-certificate)
### Default Ingress
By default, RKE deploys the NGINX ingress controller on all schedulable nodes.
> **Note:** As of v0.1.8, only workers are considered schedulable nodes, but before v0.1.8, worker and controlplane nodes were considered schedulable nodes.
@@ -12,7 +23,7 @@ RKE will deploy the ingress controller as a DaemonSet with `hostnetwork: true`,
The images used for the ingress controller are under the [`system_images` directive]({{<baseurl>}}/rke/latest/en/config-options/system-images/). For each Kubernetes version, there are default images associated with the ingress controller, but these can be overridden by changing the image tag in `system_images`.
## Scheduling Ingress Controllers
### Scheduling Ingress Controllers
If you only want ingress controllers to be deployed on specific nodes, you can set a `node_selector` for the ingress. The label in the `node_selector` would need to match the label on the nodes for the ingress controller to be deployed.
@@ -30,13 +41,25 @@ ingress:
app: ingress
```
## Tolerations
### Ingress Priority Class Name
_Available as of RKE v1.2.6+_
The [pod priority](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#pod-priority) is set by configuring a priority class name:
```yaml
ingress:
provider: nginx
ingress_priority_class_name: system-cluster-critical
```
### Tolerations
_Available as of v1.2.4_
The configured tolerations apply to the `default-http-backend` Deployment.
```
```yaml
ingress:
tolerations:
- key: "node.kubernetes.io/unreachable"
@@ -55,7 +78,7 @@ To check for applied tolerations `default-http-backend` Deployment, use the foll
kubectl -n ingress-nginx get deploy default-http-backend -o jsonpath='{.spec.template.spec.tolerations}'
```
## Disabling the Default Ingress Controller
### Disabling the Default Ingress Controller
You can disable the default controller by setting the ingress `provider` directive to `none` in the cluster configuration.
@@ -63,7 +86,7 @@ You can disable the default controller by specifying `none` to the ingress `pro
ingress:
provider: none
```
## Configuring NGINX Ingress Controller
### Configuring NGINX Ingress Controller
NGINX can be configured through several mechanisms available in Kubernetes: a [list of options for the NGINX config map](https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md), [command line extra_args](https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/cli-arguments.md), and [annotations](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/).
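As a sketch, config map options and extra command-line arguments can be set under the `ingress` directive in `cluster.yml` (the specific option names below are illustrative examples taken from the linked references):

```yaml
ingress:
  provider: nginx
  options:
    map-hash-bucket-size: "128"   # an NGINX config map option (example)
  extra_args:
    enable-ssl-passthrough: ""    # a command-line argument (example)
```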
@@ -88,7 +111,7 @@ ingress:
> **What happens if the field is omitted?** The value of `default_backend` will default to `true`. This maintains behavior with older versions of `rke`. However, a future version of `rke` will change the default value to `false`.
## Configuring an NGINX Default Certificate
### Configuring an NGINX Default Certificate
When configuring an ingress object with TLS termination, you must provide it with a certificate used for encryption/decryption. Instead of explicitly defining a certificate each time you configure an ingress, you can set up a custom certificate that's used by default.
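One way to sketch this: create a TLS secret containing the certificate, then point the controller at it with the NGINX `default-ssl-certificate` argument (the secret name and namespace below are assumptions):

```yaml
ingress:
  provider: nginx
  extra_args:
    # references a pre-created TLS secret as <namespace>/<secret-name>
    default-ssl-certificate: "ingress-nginx/ingress-default-cert"
```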
@@ -9,13 +9,17 @@ RKE will deploy Metrics Server as a Deployment.
The image used for Metrics Server is under the [`system_images` directive]({{<baseurl>}}/rke/latest/en/config-options/system-images/). For each Kubernetes version, there is a default image associated with the Metrics Server, but these can be overridden by changing the image tag in `system_images`.
## Tolerations
- [Tolerations](#tolerations)
- [Priority Class Name](#metrics-server-priority-class-name)
- [Disabling the Metrics Server](#disabling-the-metrics-server)
### Tolerations
_Available as of v1.2.4_
The configured tolerations apply to the `metrics-server` Deployment.
```
```yaml
monitoring:
tolerations:
- key: "node.kubernetes.io/unreachable"
@@ -34,7 +38,19 @@ To check for applied tolerations on the `metrics-server` Deployment, use the fol
kubectl -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.tolerations}'
```
## Disabling the Metrics Server
### Metrics Server Priority Class Name
_Available as of RKE v1.2.6+_
The [pod priority](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#pod-priority) is set by configuring a priority class name:
```yaml
monitoring:
provider: metrics-server
metrics_server_priority_class_name: system-cluster-critical
```
### Disabling the Metrics Server
_Available as of v0.2.0_
@@ -10,7 +10,27 @@ RKE provides the following network plug-ins that are deployed as add-ons:
- Canal
- Weave
> **Note:** After you launch the cluster, you cannot change your network provider. Therefore, choose which network provider you want to use carefully, as Kubernetes doesn't allow switching between network providers. Once a cluster is created with a network provider, changing network providers would require you to tear down the entire cluster and all its applications.
> After you launch the cluster, you cannot change your network provider. Therefore, choose which network provider you want to use carefully, as Kubernetes doesn't allow switching between network providers. Once a cluster is created with a network provider, changing network providers would require you to tear down the entire cluster and all its applications.
- [Changing the Default Network Plug-in](#changing-the-default-network-plug-in)
- [Disabling Deployment of a Network Plug-in](#disabling-deployment-of-a-network-plug-in)
- [Network Plug-in Options](#network-plug-in-options)
- [Canal](#canal)
- [Canal Network Plug-in Options](#canal-network-plug-in-options)
- [Canal Interface](#canal-interface)
- [Canal Network Plug-in Tolerations](#canal-network-plug-in-tolerations)
- [Flannel](#flannel)
- [Flannel Network Plug-in Options](#flannel-network-plug-in-options)
- [Flannel Interface](#flannel-interface)
- [Calico](#calico)
- [Calico Network Plug-in Options](#calico-network-plug-in-options)
- [Calico Cloud Provider](#calico-cloud-provider)
- [Calico Network Plug-in Tolerations](#calico-network-plug-in-tolerations)
- [Weave](#weave)
- [Weave Network Plug-in Options](#weave-network-plug-in-options)
- [Custom Network Plug-ins](#custom-network-plug-ins)
# Changing the Default Network Plug-in
By default, the network plug-in is `canal`. If you want to use another network plug-in, you need to specify which network plug-in to enable at the cluster level in the `cluster.yml`.
@@ -35,7 +55,14 @@ network:
Besides the different images that could be used to deploy network plug-ins, certain network plug-ins support additional options that can be used to customize the network plug-in.
## Canal Network Plug-in Options
- [Canal](#canal)
- [Flannel](#flannel)
- [Calico](#calico)
- [Weave](#weave)
# Canal
### Canal Network Plug-in Options
```yaml
network:
@@ -43,20 +70,23 @@ network:
options:
canal_iface: eth1
canal_flannel_backend_type: vxlan
canal_autoscaler_priority_class_name: system-cluster-critical # Available as of RKE v1.2.6+
canal_priority_class_name: system-cluster-critical # Available as of RKE v1.2.6+
```
#### Canal Interface
### Canal Interface
By setting the `canal_iface`, you can configure the interface to use for inter-host communication.
The `canal_flannel_backend_type` option allows you to specify the type of [flannel backend](https://github.com/coreos/flannel/blob/master/Documentation/backends.md) to use. By default, the `vxlan` backend is used.
## Canal Network Plug-in Tolerations
### Canal Network Plug-in Tolerations
_Available as of v1.2.4_
The configured tolerations apply to the `calico-kube-controllers` Deployment.
```
```yaml
network:
plugin: canal
tolerations:
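# Example toleration entry (a sketch; the values are illustrative and follow
# the standard Kubernetes toleration schema):
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300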
@@ -76,7 +106,8 @@ To check for applied tolerations on the `calico-kube-controllers` Deployment, us
kubectl -n kube-system get deploy calico-kube-controllers -o jsonpath='{.spec.template.spec.tolerations}'
```
## Flannel Network Plug-in Options
# Flannel
### Flannel Network Plug-in Options
```yaml
network:
@@ -84,22 +115,29 @@ network:
options:
flannel_iface: eth1
flannel_backend_type: vxlan
flannel_autoscaler_priority_class_name: system-cluster-critical # Available as of RKE v1.2.6+
flannel_priority_class_name: system-cluster-critical # Available as of RKE v1.2.6+
```
#### Flannel Interface
### Flannel Interface
By setting the `flannel_iface`, you can configure the interface to use for inter-host communication.
The `flannel_backend_type` option allows you to specify the type of [flannel backend](https://github.com/coreos/flannel/blob/master/Documentation/backends.md) to use. By default, the `vxlan` backend is used.
## Calico Network Plug-in Options
# Calico
### Calico Network Plug-in Options
```yaml
network:
plugin: calico
options:
calico_cloud_provider: aws
calico_autoscaler_priority_class_name: system-cluster-critical # Available as of RKE v1.2.6+
calico_priority_class_name: system-cluster-critical # Available as of RKE v1.2.6+
```
#### Calico Cloud Provider
### Calico Cloud Provider
Calico currently supports only two cloud providers, AWS and GCE, which can be set using `calico_cloud_provider`.
@@ -108,13 +146,13 @@ Calico currently only supports 2 cloud providers, AWS or GCE, which can be set u
- `aws`
- `gce`
## Calico Network Plug-in Tolerations
### Calico Network Plug-in Tolerations
_Available as of v1.2.4_
The configured tolerations apply to the `calico-kube-controllers` Deployment.
```
```yaml
network:
plugin: calico
tolerations:
@@ -134,19 +172,23 @@ To check for applied tolerations on the `calico-kube-controllers` Deployment, us
kubectl -n kube-system get deploy calico-kube-controllers -o jsonpath='{.spec.template.spec.tolerations}'
```
## Weave Network Plug-in Options
# Weave
### Weave Network Plug-in Options
```yaml
network:
plugin: weave
options:
weave_autoscaler_priority_class_name: system-cluster-critical # Available as of RKE v1.2.6+
weave_priority_class_name: system-cluster-critical # Available as of RKE v1.2.6+
weave_network_provider:
password: "Q]SZOQ5wp@n$oijz"
```
#### Weave encryption
### Weave Encryption
Weave encryption can be enabled by passing a string password to the network provider config.
## Custom Network Plug-ins
# Custom Network Plug-ins
It is possible to add a custom network plug-in by using the [user-defined add-on functionality]({{<baseurl>}}/rke/latest/en/config-options/add-ons/user-defined-add-ons/) of RKE. In the `addons` field, you can add the add-on manifest for the network plug-in that you want, as shown in [this example]({{<baseurl>}}/rke/latest/en/config-options/add-ons/network-plugins/custom-network-plugin-example).
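A minimal sketch of the approach (the manifest content is a placeholder; the built-in plug-in is set to `none` so that the custom add-on is the only network plug-in deployed):

```yaml
network:
  plugin: none
addons: |-
  ---
  # paste the full manifest of your custom network plug-in here (placeholder)
```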