From d9d22a8f8e154c2834d4033ff367d8bbd68f1845 Mon Sep 17 00:00:00 2001 From: tmiklu <61796331+tmiklu@users.noreply.github.com> Date: Tue, 29 Dec 2020 13:44:49 +0100 Subject: [PATCH 01/36] Change to correct name convention Current section does not contain option "custom" this is available under Existing Nodes --- .../deployment/quickstart-manual-setup/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md b/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md index f0ee9913026..b16b26988e7 100644 --- a/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md +++ b/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md @@ -76,7 +76,7 @@ In this task, you can use the versatile **Custom** option. This option lets you 1. From the **Clusters** page, click **Add Cluster**. -2. Choose **Custom**. +2. Choose **Existing Nodes**. 3. Enter a **Cluster Name**. From c0f02e626ee292c94bc9c747a1df2f7fdaa8b50e Mon Sep 17 00:00:00 2001 From: dkeightley <20566450+dkeightley@users.noreply.github.com> Date: Thu, 31 Dec 2020 12:16:57 +1300 Subject: [PATCH 02/36] Add note about RKE/k3s steps --- .../infrastructure-tutorials/ec2-node/_index.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md index fafbf80e4e4..564ccdb49fb 100644 --- a/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md +++ b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md @@ -35,9 +35,11 @@ If the Rancher server is installed in a single Docker container, you only need o 1. Choose a new or existing key pair that you will use to connect to your instance later. If you are using an existing key pair, make sure you already have access to the private key. 1. Click **Launch Instances.** -**Result:** You have created Rancher nodes that satisfy the requirements for OS, hardware, and networking. Next, you will install Docker on each node. +**Result:** You have created Rancher nodes that satisfy the requirements for OS, hardware, and networking. -### 3. Install Docker and Create User +**Note:** If the nodes are being used for an RKE Kubernetes cluster, install Docker on each node in the next step. For a K3s Kubernetes cluster, the nodes are now ready to install K3s. + +### 3. Install Docker and Create User for RKE Kubernetes Cluster Nodes 1. From the [AWS EC2 console,](https://console.aws.amazon.com/ec2/) click **Instances** in the left panel. 1. Go to the instance that you want to install Docker on. 
Select the instance and click **Actions > Connect.** From f7176dc614bbd001552da572382d42e935ae9c50 Mon Sep 17 00:00:00 2001 From: Wenhan Shi Date: Tue, 26 Jan 2021 13:01:52 +0900 Subject: [PATCH 03/36] Update _index.md Rancher install command should also have no_proxy settings due to https://rancher.com/docs/rancher/v2.x/en/installation/install-rancher-on-k8s/chart-options/#http-proxy --- .../behind-proxy/install-rancher/_index.md | 1 + 1 file changed, 1 insertion(+) diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/behind-proxy/install-rancher/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/behind-proxy/install-rancher/_index.md index 532abe59d5d..24ae1209d8f 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/behind-proxy/install-rancher/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/behind-proxy/install-rancher/_index.md @@ -65,6 +65,7 @@ helm upgrade --install rancher rancher-latest/rancher \ --namespace cattle-system \ --set hostname=rancher.example.com \ --set proxy=http://${proxy_host} + --set no_proxy=127.0.0.0/8\\,10.0.0.0/8\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local ``` After waiting for the deployment to finish: From b4c5e00cd5955c4442f2bbbba5e00de02daababe Mon Sep 17 00:00:00 2001 From: Ansil H Date: Thu, 28 Jan 2021 22:29:34 +0530 Subject: [PATCH 04/36] Added reminder to convert keys to base64 Added reminder to convert keys to base64 --- .../en/backups/v2.5/configuration/backup-config/_index.md | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/content/rancher/v2.x/en/backups/v2.5/configuration/backup-config/_index.md b/content/rancher/v2.x/en/backups/v2.5/configuration/backup-config/_index.md index d86ea7fcb64..a083f741b66 100644 --- a/content/rancher/v2.x/en/backups/v2.5/configuration/backup-config/_index.md +++ b/content/rancher/v2.x/en/backups/v2.5/configuration/backup-config/_index.md @@ -139,6 +139,12 @@ data: secretKey: ``` +Make sure to encode the keys in base64 in YAML file. +Run the following command to encode the keys in base64. +``` +echo -n "your_key" |base64 +``` + ### IAM Permissions for EC2 Nodes to Access S3 There are two ways to set up the `rancher-backup` operator to use S3 as the backup storage location. @@ -182,4 +188,4 @@ After the role is created, and you have attached the corresponding instance prof # Examples -For example Backup custom resources, refer to [this page.](../../examples/#backup) \ No newline at end of file +For example Backup custom resources, refer to [this page.](../../examples/#backup) From cf39c215fadf594a2d386cfb8d89345796c8593d Mon Sep 17 00:00:00 2001 From: Ansil H Date: Thu, 28 Jan 2021 22:30:52 +0530 Subject: [PATCH 05/36] Update _index.md --- .../en/backups/v2.5/configuration/backup-config/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/rancher/v2.x/en/backups/v2.5/configuration/backup-config/_index.md b/content/rancher/v2.x/en/backups/v2.5/configuration/backup-config/_index.md index a083f741b66..930d5339946 100644 --- a/content/rancher/v2.x/en/backups/v2.5/configuration/backup-config/_index.md +++ b/content/rancher/v2.x/en/backups/v2.5/configuration/backup-config/_index.md @@ -139,8 +139,8 @@ data: secretKey: ``` -Make sure to encode the keys in base64 in YAML file. -Run the following command to encode the keys in base64. +Make sure to encode the keys to base64 in YAML file. +Run the following command to encode the keys. 
``` echo -n "your_key" |base64 ``` From 2e0a109b0b689994386f16c35a53b9c4b4bef4e8 Mon Sep 17 00:00:00 2001 From: NoeB Date: Sat, 30 Jan 2021 14:39:37 +0100 Subject: [PATCH 06/36] Update k3s storage page with arm64 note Updated k3s storage note that longhorn also supports arm64 experimental. --- content/k3s/latest/en/storage/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/k3s/latest/en/storage/_index.md b/content/k3s/latest/en/storage/_index.md index 760bd893fff..fd0dcba1168 100644 --- a/content/k3s/latest/en/storage/_index.md +++ b/content/k3s/latest/en/storage/_index.md @@ -75,7 +75,7 @@ The status should be Bound for each. [comment]: <> (pending change - longhorn may support arm64 and armhf in the future.) -> **Note:** At this time Longhorn only supports amd64. +> **Note:** At this time Longhorn only supports amd64 and arm64 (experimental). K3s supports [Longhorn](https://github.com/longhorn/longhorn). Longhorn is an open-source distributed block storage system for Kubernetes. From 4787f6a0c71269c02cadbd3db59fb9ec42441e19 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Va=C5=A1ek=20Chalupn=C3=AD=C4=8Dek?= Date: Mon, 1 Feb 2021 16:03:51 +0100 Subject: [PATCH 07/36] Fixing misleading instruction. Original text is in clash with next step of choosing PVC created under section no. 2. --- .../volumes-and-storage/provisioning-new-storage/_index.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md index c67156c1d36..2feb6a07c61 100644 --- a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md @@ -94,7 +94,7 @@ To attach the PVC to a new workload, 1. Create a workload as you would in [Deploying Workloads]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/). 1. For **Workload Type**, select **Stateful set of 1 pod**. -1. Expand the **Volumes** section and click **Add Volume > Add a New Persistent Volume (Claim).** +1. Expand the **Volumes** section and click **Add Volume > Use an Existing Persistent Volume (Claim).** 1. In the **Persistent Volume Claim** section, select the newly created persistent volume claim that is attached to the storage class. 1. In the **Mount Point** field, enter the path that the workload will use to access the volume. 1. Click **Launch.** @@ -105,9 +105,9 @@ To attach the PVC to an existing workload, 1. Go to the project that has the workload that will have the PVC attached. 1. Go to the workload that will have persistent storage and click **⋮ > Edit.** -1. Expand the **Volumes** section and click **Add Volume > Add a New Persistent Volume (Claim).** +1. Expand the **Volumes** section and click **Add Volume > Use an Existing Persistent Volume (Claim).** 1. In the **Persistent Volume Claim** section, select the newly created persistent volume claim that is attached to the storage class. 1. In the **Mount Point** field, enter the path that the workload will use to access the volume. 1. Click **Save.** -**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. 
If not, Rancher will provision new persistent storage. \ No newline at end of file +**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. If not, Rancher will provision new persistent storage. From a3fe082335a804d17170a82e5ecc30c2f02e0b4e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Va=C5=A1ek=20Chalupn=C3=AD=C4=8Dek?= Date: Mon, 1 Feb 2021 16:09:14 +0100 Subject: [PATCH 08/36] Making instructions clearer I was stuck myself when following the instructions, as section no. 1 refers to the Cluster Explorer and in section no. 2 one needs to go to the Cluster Manager. I think it's a good idea to make this clear in step no. 1 of each of these sections. --- .../volumes-and-storage/provisioning-new-storage/_index.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md index c67156c1d36..c0e514cd5f5 100644 --- a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md @@ -44,7 +44,7 @@ To use a storage provisioner that is not on the above list, you will need to use These steps describe how to set up a storage class at the cluster level. -1. Go to the cluster for which you want to dynamically provision persistent storage volumes. +1. Go to the **Cluster Explorer** of the cluster for which you want to dynamically provision persistent storage volumes. 1. From the cluster view, select `Storage > Storage Classes`. Click `Add Class`. @@ -64,7 +64,7 @@ For full information about the storage class parameters, refer to the official [ These steps describe how to set up a PVC in the namespace where your stateful workload will be deployed. -1. Go to the project containing a workload that you want to add a PVC to. +1. In the **Cluster Manager**, go to the project containing a workload that you want to add a PVC to. 1. From the main navigation bar, choose **Resources > Workloads.** (In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.) Then select the **Volumes** tab. Click **Add Volume**. @@ -110,4 +110,4 @@ To attach the PVC to an existing workload, 1. In the **Mount Point** field, enter the path that the workload will use to access the volume. 1. Click **Save.** -**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. If not, Rancher will provision new persistent storage. \ No newline at end of file +**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. If not, Rancher will provision new persistent storage. 
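For reference, the storage-class and PVC workflow that the two patches above clarify corresponds to Kubernetes resources roughly like the following. This is a minimal sketch, not part of the patches: the class name, the `kubernetes.io/aws-ebs` provisioner, and the 10Gi size are illustrative assumptions standing in for whatever the Rancher UI generates.

```yaml
# Illustrative StorageClass; the provisioner and parameters are assumptions
# chosen for the example, not values mandated by the docs above.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-ebs
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
---
# A PVC bound to that class; when a workload mounts this claim, a matching
# PV is bound if one exists, otherwise a new PV is dynamically provisioned.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: example-ebs
  resources:
    requests:
      storage: 10Gi
```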
From 9a5ba5ff1b99bbab51df13b77efbf2cc47102855 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Wed, 3 Feb 2021 20:23:33 -0700 Subject: [PATCH 09/36] Change order/section titles in installation docs --- .../v2.x/en/installation/install-rancher-on-k8s/_index.md | 4 ++-- .../v2.x/en/installation/install-rancher-on-linux/_index.md | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md index c81bf508624..bbc59c11ec8 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md @@ -1,7 +1,7 @@ --- -title: Install Rancher on a Kubernetes Cluster +title: Install/Upgrade Rancher on a Kubernetes Cluster description: Learn how to install Rancher in development and production environments. Read about single node and high availability installation -weight: 3 +weight: 2 aliases: - /rancher/v2.x/en/installation/k8s-install/ - /rancher/v2.x/en/installation/k8s-install/helm-rancher diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-linux/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-linux/_index.md index 487f6058742..c7c69757429 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-linux/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-linux/_index.md @@ -1,6 +1,6 @@ --- -title: Install Rancher on a Linux OS -weight: 2 +title: Install/Upgrade Rancher on a Linux OS +weight: 3 --- _Available as of Rancher v2.5.4_ From 5d0caa4974e89f08187c56f7e6da709c1105dee9 Mon Sep 17 00:00:00 2001 From: Devin Date: Sun, 7 Feb 2021 17:18:50 +0200 Subject: [PATCH 10/36] Typo in docs Small change from Docket to Docker --- .../resources/advanced/rke-add-on/layer-4-lb/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rancher/v2.x/en/installation/resources/advanced/rke-add-on/layer-4-lb/_index.md b/content/rancher/v2.x/en/installation/resources/advanced/rke-add-on/layer-4-lb/_index.md index 276e043168f..f23527b9096 100644 --- a/content/rancher/v2.x/en/installation/resources/advanced/rke-add-on/layer-4-lb/_index.md +++ b/content/rancher/v2.x/en/installation/resources/advanced/rke-add-on/layer-4-lb/_index.md @@ -181,7 +181,7 @@ Once you have the `rancher-cluster.yml` config file template, edit the nodes sec 1. Update the `nodes` section with the information of your [Linux hosts](#1-provision-linux-hosts). - For each node in your cluster, update the following placeholders: `IP_ADDRESS_X` and `USER`. The specified user should be able to access the Docket socket, you can test this by logging in with the specified user and run `docker ps`. + For each node in your cluster, update the following placeholders: `IP_ADDRESS_X` and `USER`. The specified user should be able to access the Docker socket, you can test this by logging in with the specified user and run `docker ps`. >**Note:** > When using RHEL/CentOS, the SSH user can't be root due to https://bugzilla.redhat.com/show_bug.cgi?id=1527565. See [Operating System Requirements]({{}}/rke/latest/en/installation/os#redhat-enterprise-linux-rhel-centos) >for RHEL/CentOS specific requirements. 
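As a quick aside to the Docker socket sentence fixed in the patch above, the check (and the usual remedy) look roughly like this; `USER` and `IP_ADDRESS_1` are the placeholders from `rancher-cluster.yml`, and the `docker` group name assumes a standard Docker package install:

```bash
# Verify the configured SSH user can reach the Docker socket on a node.
ssh USER@IP_ADDRESS_1 docker ps

# If this fails with a permission error, add the user to the docker group
# (the user must log out and back in for the group change to take effect).
ssh root@IP_ADDRESS_1 usermod -aG docker USER
```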
From 0680c978e3a8cb60b40be64db8398533fa91ecfd Mon Sep 17 00:00:00 2001 From: Erik Wilson Date: Mon, 8 Feb 2021 09:17:28 -0700 Subject: [PATCH 11/36] Update k3s node password docs for new secrets --- content/k3s/latest/en/architecture/_index.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/content/k3s/latest/en/architecture/_index.md b/content/k3s/latest/en/architecture/_index.md index 7418199bcce..ea1ef117f21 100644 --- a/content/k3s/latest/en/architecture/_index.md +++ b/content/k3s/latest/en/architecture/_index.md @@ -48,7 +48,9 @@ After registration, the agent nodes establish a connection directly to one of th Agent nodes are registered with a websocket connection initiated by the `k3s agent` process, and the connection is maintained by a client-side load balancer running as part of the agent process. -Agents will register with the server using the node cluster secret along with a randomly generated password for the node, stored at `/etc/rancher/node/password`. The server will store the passwords for individual nodes at `/var/lib/rancher/k3s/server/cred/node-passwd`, and any subsequent attempts must use the same password. +Agents will register with the server using the node cluster secret along with a randomly generated password for the node, stored at `/etc/rancher/node/password`. The server will store the passwords for individual nodes as Kubernetes secrets, and any subsequent attempts must use the same password. Node password secrets are stored in the `kube-system` namespace with names using the template `.node-password.k3s`. + +Note: Prior to K3s v1.20.2 servers stored passwords on disk at `/var/lib/rancher/k3s/server/cred/node-passwd`. If the `/etc/rancher/node` directory of an agent is removed, the password file should be recreated for the agent, or the entry removed from the server. From 5e84ad19c6a0c06a92b0c7d71261c4222a2b27d1 Mon Sep 17 00:00:00 2001 From: Tom Murphy Date: Tue, 9 Feb 2021 09:18:57 +1100 Subject: [PATCH 12/36] Added doc note FW rule for GKE private clusters --- .../v2.x/en/installation/requirements/ports/_index.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/content/rancher/v2.x/en/installation/requirements/ports/_index.md b/content/rancher/v2.x/en/installation/requirements/ports/_index.md index 10d08d6349a..e6966253d21 100644 --- a/content/rancher/v2.x/en/installation/requirements/ports/_index.md +++ b/content/rancher/v2.x/en/installation/requirements/ports/_index.md @@ -168,6 +168,14 @@ The following tables break down the port requirements for Rancher nodes, for inb {{% /accordion %}} +### Ports for Rancher Server in GCP GKE + +When deploying Rancher into a Google Kubernetes Engine [private cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters), the nodes where Rancher runs must be accessible from the control plane: + +| Protocol | Port | Source | Description | +|-----|-----|----------------|---| +| TCP | 9443 | The GKE master `/28` range | Rancher webhooks | + # Downstream Kubernetes Cluster Nodes Downstream Kubernetes clusters run your apps and services. This section describes what ports need to be opened on the nodes in downstream clusters so that Rancher can communicate with them. 
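To illustrate the GKE firewall requirement added in the patch above, the rule can be created with a `gcloud` command along these lines. The rule name, network, and target tags are placeholders, and the source range must be replaced with your cluster's actual master `/28` CIDR:

```bash
# Allow the GKE control plane to reach the Rancher webhook port on the nodes.
gcloud compute firewall-rules create rancher-webhook \
  --network my-vpc \
  --allow tcp:9443 \
  --source-ranges 172.16.0.32/28 \
  --target-tags my-gke-nodes
```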
From b9296ea88ae956448c491aab5c3ce7e45155f0bf Mon Sep 17 00:00:00 2001 From: Napsty Date: Tue, 9 Feb 2021 08:23:55 +0100 Subject: [PATCH 13/36] Issue 25580 --- .../install-rancher-on-k8s/upgrades/helm2/_index.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md index d7f7091919e..0336561e096 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md @@ -118,9 +118,11 @@ If you are currently running the cert-manger whose version is older than v0.11, 1. Uninstall Rancher ``` - helm delete rancher -n cattle-system + helm delete rancher ``` +In case this results in an error that the release "rancher" was not found, make sure you are using the correct deployment name. Use `helm list` to list the helm-deployed releases. + 2. Uninstall and reinstall `cert-manager` according to the instructions on the [Upgrading Cert-Manager]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/helm-2-instructions) page. 3. Reinstall Rancher to the latest version with all your settings. Take all the values from the step 1 and append them to the command using `--set key=value`. Note: There will be many more options from the step 1 that need to be appended. From 7bb07cf46fa518b219385ac48930c733e7ffec76 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Fri, 5 Feb 2021 12:53:09 -0700 Subject: [PATCH 14/36] Update rollback and restore docs --- .../restore/rke-restore/v2.0-v2.1/_index.md | 75 +++++++++ .../backups/v2.5/restoring-rancher/_index.md | 35 +---- .../rollbacks/_index.md | 143 +++++++++--------- 3 files changed, 149 insertions(+), 104 deletions(-) create mode 100644 content/rancher/v2.x/en/backups/v2.0.x-v2.4.x/restore/rke-restore/v2.0-v2.1/_index.md diff --git a/content/rancher/v2.x/en/backups/v2.0.x-v2.4.x/restore/rke-restore/v2.0-v2.1/_index.md b/content/rancher/v2.x/en/backups/v2.0.x-v2.4.x/restore/rke-restore/v2.0-v2.1/_index.md new file mode 100644 index 00000000000..4838d0942a8 --- /dev/null +++ b/content/rancher/v2.x/en/backups/v2.0.x-v2.4.x/restore/rke-restore/v2.0-v2.1/_index.md @@ -0,0 +1,75 @@ +--- +title: "Rolling back to v2.0.0-v2.1.5" +weight: 1 +--- + +> Rolling back to Rancher v2.0-v2.1 is no longer supported. The instructions for rolling back to these versions are preserved here and are intended to be used only in cases where upgrading to Rancher v2.2+ is not feasible. + +If you are rolling back to versions in either of these scenarios, you must follow some extra instructions in order to get your clusters working. + +- Rolling back from v2.1.6+ to any version between v2.1.0 - v2.1.5 or v2.0.0 - v2.0.10. +- Rolling back from v2.0.11+ to any version between v2.0.0 - v2.0.10. + +Because of the changes necessary to address [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321), special steps are necessary if the user wants to roll back to a previous version of Rancher where this vulnerability exists. The steps are as follows: + +1. Record the `serviceAccountToken` for each cluster. To do this, save the following script on a machine with `kubectl` access to the Rancher management plane and execute it. You will need to run these commands on the machine where the rancher container is running. Ensure JQ is installed before running the command. 
The commands will vary depending on how you installed Rancher. **Rancher Installed with Docker** ``` docker exec <rancher-container-name> kubectl get clusters -o json | jq '[.items[] | select(any(.status.conditions[]; .type == "ServiceAccountMigrated")) | {name: .metadata.name, token: .status.serviceAccountToken}]' > tokens.json ``` **Rancher Installed on a Kubernetes Cluster** ``` kubectl get clusters -o json | jq '[.items[] | select(any(.status.conditions[]; .type == "ServiceAccountMigrated")) | {name: .metadata.name, token: .status.serviceAccountToken}]' > tokens.json ``` 2. After executing the command, a `tokens.json` file will be created. **Important!** Back up this file in a safe place. You will need it to restore functionality to your clusters after rolling back Rancher. **If you lose this file, you may lose access to your clusters.** 3. Roll back Rancher following the [normal instructions]({{}}/rancher/v2.x/en/upgrades/rollbacks/). 4. Once Rancher comes back up, every cluster managed by Rancher (except for Imported clusters) will be in an `Unavailable` state. 5. Apply the backed up tokens based on how you installed Rancher. **Rancher Installed with Docker** Save the following script as `apply_tokens.sh` to the machine where the Rancher docker container is running. Also copy the `tokens.json` file created previously to the same directory as the script. ``` set -e tokens=$(jq .[] -c tokens.json) for token in $tokens; do name=$(echo $token | jq -r .name) value=$(echo $token | jq -r .token) docker exec $1 kubectl patch --type=merge clusters $name -p "{\"status\": {\"serviceAccountToken\": \"$value\"}}" done ``` Set the script to allow execution (`chmod +x apply_tokens.sh`) and execute the script as follows: ``` ./apply_tokens.sh <rancher-container-name> ``` After a few moments the clusters will go from `Unavailable` back to `Available`. **Rancher Installed on a Kubernetes Cluster** Save the following script as `apply_tokens.sh` to a machine with kubectl access to the Rancher management plane. Also copy the `tokens.json` file created previously to the same directory as the script. ``` set -e tokens=$(jq .[] -c tokens.json) for token in $tokens; do name=$(echo $token | jq -r .name) value=$(echo $token | jq -r .token) kubectl patch --type=merge clusters $name -p "{\"status\": {\"serviceAccountToken\": \"$value\"}}" done ``` Set the script to allow execution (`chmod +x apply_tokens.sh`) and execute the script as follows: ``` ./apply_tokens.sh ``` After a few moments the clusters will go from `Unavailable` back to `Available`. 6. Continue using Rancher as normal. diff --git a/content/rancher/v2.x/en/backups/v2.5/restoring-rancher/_index.md b/content/rancher/v2.x/en/backups/v2.5/restoring-rancher/_index.md index 3d07d4e56cc..0bc2cfd3676 100644 --- a/content/rancher/v2.x/en/backups/v2.5/restoring-rancher/_index.md +++ b/content/rancher/v2.x/en/backups/v2.5/restoring-rancher/_index.md @@ -13,9 +13,7 @@ A restore is performed by creating a Restore custom resource. > * Follow the instructions from this page for restoring rancher on the same cluster where it was backed up from. In order to migrate rancher to a new cluster, follow the steps to [migrate rancher.](../migrating-rancher) > * While restoring rancher on the same setup, the operator will scale down the rancher deployment when restore starts, and it will scale back up the deployment once restore completes. So Rancher will be unavailable during the restore. 
-First, create the Restore custom resource. Then restart Rancher using the previous Rancher version. - -### 1. Create the Restore Custom Resource +### Create the Restore Custom Resource 1. In the **Cluster Explorer,** go to the dropdown menu in the upper left corner and click **Rancher Backups.** 1. Click **Restore.** @@ -44,7 +42,7 @@ First, create the Restore custom resource. Then restart Rancher using the previo 1. Click **Create.** -The rancher-operator scales down the rancher deployment during restore, and scales it back up once the restore completes. The resources are restored in this order: +**Result:** The rancher-operator scales down the rancher deployment during restore, and scales it back up once the restore completes. The resources are restored in this order: 1. Custom Resource Definitions (CRDs) 2. Cluster-scoped resources @@ -55,33 +53,4 @@ To check how the restore is progressing, you can check the logs of the operator. ```yaml kubectl get pods -n cattle-resources-system kubectl logs -n cattle-resources-system -f -``` - -2. Restart Rancher - -Rancher has to be started with the lower/previous version after a rollback using the Rancher backup operator. It should be started with the same Helm chart values as the previous install. - -Get the values, which were passed with `--set`, from the current Rancher Helm chart that is installed: - -``` -helm get values rancher -n cattle-system - -hostname: rancher.my.org -``` - -> **Note:** There will be more values that are listed with this command. This is just an example of one of the values. - -Alternatively, it's possible to export the current values to a file and reference that file during upgrade. For example, to only change the Rancher version: - -``` -helm get values rancher -n cattle-system -o yaml > values.yaml -``` - -Then upgrade the Helm chart to the previous Rancher version, using the previous values. In this example, the values are taken from the file: - -``` -helm upgrade rancher rancher-/rancher \ - --namespace cattle-system \ - -f values.yaml \ - --version=X.Y.Z ``` \ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/rollbacks/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/rollbacks/_index.md index 387be38d940..9758dd07086 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/rollbacks/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/rollbacks/_index.md @@ -10,81 +10,82 @@ aliases: - /rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades-rollbacks/rollbacks --- -To roll back to Rancher v2.5.0+, use the `rancher-backup` application and restore Rancher from backup according to [this section.]({{}}/rancher/v2.x/en/backups/v2.5/restoring-rancher/) Rancher has to be started with the lower/previous version after a rollback using the Rancher backup operator. +- [Rolling Back to Rancher v2.5.0+](#rolling-back-to-rancher-v2-5-0) +- [Rolling Back to Rancher v2.2-v2.4+](#rolling-back-to-rancher-v2-2-v2-4) +- [Rolling Back to Rancher v2.0-v2.1](#rolling-back-to-rancher-v2-0-v2-1) -To roll back to Rancher prior to v2.5, follow the procedure detailed here: [Restoring Backups — Kubernetes installs]({{}}/rancher/v2.x/en/backups/restorations/ha-restoration) Restoring a snapshot of the Rancher Server cluster will revert Rancher to the version and state at the time of the snapshot. 
+# Rolling Back to Rancher v2.5.0+ + +To roll back to Rancher v2.5.0+, use the `rancher-backup` application and restore Rancher from backup. + +Rancher has to be started with the lower/previous version after a rollback. + +A restore is performed by creating a Restore custom resource. + +> **Important** +> +> * Follow the instructions from this page for restoring rancher on the same cluster where it was backed up from. In order to migrate rancher to a new cluster, follow the steps to [migrate rancher.](../migrating-rancher) +> * While restoring rancher on the same setup, the operator will scale down the rancher deployment when restore starts, and it will scale back up the deployment once restore completes. So Rancher will be unavailable during the restore. + +### Create the Restore Custom Resource + +1. In the **Cluster Explorer,** go to the dropdown menu in the upper left corner and click **Rancher Backups.** +1. Click **Restore.** +1. Create the Restore with the form, or with YAML. For creating the Restore resource using form, refer to the [configuration reference](../configuration/restore-config) and to the [examples.](../examples/#restore) +1. For using the YAML editor, we can click **Create > Create from YAML.** Enter the Restore YAML. + + ```yaml + apiVersion: resources.cattle.io/v1 + kind: Restore + metadata: + name: restore-migration + spec: + backupFilename: backup-b0450532-cee1-4aa1-a881-f5f48a007b1c-2020-09-15T07-27-09Z.tar.gz + encryptionConfigSecretName: encryptionconfig + storageLocation: + s3: + credentialSecretName: s3-creds + credentialSecretNamespace: default + bucketName: rancher-backups + folder: rancher + region: us-west-2 + endpoint: s3.us-west-2.amazonaws.com + ``` + + For help configuring the Restore, refer to the [configuration reference](../configuration/restore-config) and to the [examples.](../examples/#restore) + +1. Click **Create.** + +**Result:** The rancher-operator scales down the rancher deployment during restore, and scales it back up once the restore completes. The resources are restored in this order: + +1. Custom Resource Definitions (CRDs) +2. Cluster-scoped resources +3. Namespaced resources + +To check how the restore is progressing, you can check the logs of the operator. Follow these steps to get the logs: + +```yaml +kubectl get pods -n cattle-resources-system +kubectl logs -n cattle-resources-system -f +``` + +### Roll back to the previous Rancher version + +Rancher can be rolled back using the Rancher UI. + +1. In the Rancher UI, go to the local cluster. +1. Go to the System project. +1. Edit Rancher deployment and modify image to version that you are rolling back to. +1. Save changes made. + +# Rolling Back to Rancher v2.2-v2.4+ + +To roll back to Rancher prior to v2.5, follow the procedure detailed here: [Restoring Backups — Kubernetes installs]({{}}/rancher/v2.x/en/backups/v2.0.x-v2.4.x/restore/rke-restore/) Restoring a snapshot of the Rancher server cluster will revert Rancher to the version and state at the time of the snapshot. For information on how to roll back Rancher installed with Docker, refer to [this page.]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-rollbacks) > Managed clusters are authoritative for their state. This means restoring the rancher server will not revert workload deployments or changes made on managed clusters after the snapshot was taken. 
-### Rolling back to v2.0.0-v2.1.5 +# Rolling Back to Rancher v2.0-v2.1 -If you are rolling back to versions in either of these scenarios, you must follow some extra instructions in order to get your clusters working. - -- Rolling back from v2.1.6+ to any version between v2.1.0 - v2.1.5 or v2.0.0 - v2.0.10. -- Rolling back from v2.0.11+ to any version between v2.0.0 - v2.0.10. - -Because of the changes necessary to address [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321), special steps are necessary if the user wants to roll back to a previous version of Rancher where this vulnerability exists. The steps are as follows: - -1. Record the `serviceAccountToken` for each cluster. To do this, save the following script on a machine with `kubectl` access to the Rancher management plane and execute it. You will need to run these commands on the machine where the rancher container is running. Ensure JQ is installed before running the command. The commands will vary depending on how you installed Rancher. - - **Rancher Installed with Docker** - ``` - docker exec kubectl get clusters -o json | jq '[.items[] | select(any(.status.conditions[]; .type == "ServiceAccountMigrated")) | {name: .metadata.name, token: .status.serviceAccountToken}]' > tokens.json - ``` - - **Rancher Installed on a Kubernetes Cluster** - ``` - kubectl get clusters -o json | jq '[.items[] | select(any(.status.conditions[]; .type == "ServiceAccountMigrated")) | {name: .metadata.name, token: .status.serviceAccountToken}]' > tokens.json - ``` - -2. After executing the command a `tokens.json` file will be created. Important! Back up this file in a safe place.** You will need it to restore functionality to your clusters after rolling back Rancher. **If you lose this file, you may lose access to your clusters.** - -3. Rollback Rancher following the [normal instructions]({{}}/rancher/v2.x/en/upgrades/rollbacks/). - -4. Once Rancher comes back up, every cluster managed by Rancher (except for Imported clusters) will be in an `Unavailable` state. - -5. Apply the backed up tokens based on how you installed Rancher. - - **Rancher Installed with Docker** - - Save the following script as `apply_tokens.sh` to the machine where the Rancher docker container is running. Also copy the `tokens.json` file created previously to the same directory as the script. - ``` - set -e - - tokens=$(jq .[] -c tokens.json) - for token in $tokens; do - name=$(echo $token | jq -r .name) - value=$(echo $token | jq -r .token) - - docker exec $1 kubectl patch --type=merge clusters $name -p "{\"status\": {\"serviceAccountToken\": \"$value\"}}" - done - ``` - the script to allow execution (`chmod +x apply_tokens.sh`) and execute the script as follows: - ``` - ./apply_tokens.sh - ``` - After a few moments the clusters will go from Unavailable back to Available. - - **Rancher Installed on a Kubernetes Cluster** - - Save the following script as `apply_tokens.sh` to a machine with kubectl access to the Rancher management plane. Also copy the `tokens.json` file created previously to the same directory as the script. 
- ``` - set -e - - tokens=$(jq .[] -c tokens.json) - for token in $tokens; do - name=$(echo $token | jq -r .name) - value=$(echo $token | jq -r .token) - - kubectl patch --type=merge clusters $name -p "{\"status\": {\"serviceAccountToken\": \"$value\"}}" - done - ``` - Set the script to allow execution (`chmod +x apply_tokens.sh`) and execute the script as follows: - ``` - ./apply_tokens.sh - ``` - After a few moments the clusters will go from `Unavailable` back to `Available`. - -6. Continue using Rancher as normal. +Rolling back to Rancher v2.0-v2.1 is no longer supported. The instructions for rolling back to these versions are preserved [here]({{}}/rancher/v2.x/en/backups/v2.0.x-v2.4.x/restore/rke-restore/v2.0-v2.1) and are intended to be used only in cases where upgrading to Rancher v2.2+ is not feasible. \ No newline at end of file From cb54d367236d879e9bb6ab283bf1ea2df8ccfff1 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Tue, 9 Feb 2021 10:12:57 -0700 Subject: [PATCH 15/36] Make CPU and memory reqs generic for hosted K8s installs #2996 --- .../en/installation/requirements/_index.md | 46 +++++++++---------- 1 file changed, 21 insertions(+), 25 deletions(-) diff --git a/content/rancher/v2.x/en/installation/requirements/_index.md b/content/rancher/v2.x/en/installation/requirements/_index.md index c70a20aa7de..8d5622dcf05 100644 --- a/content/rancher/v2.x/en/installation/requirements/_index.md +++ b/content/rancher/v2.x/en/installation/requirements/_index.md @@ -12,9 +12,12 @@ Make sure the node(s) for the Rancher server fulfill the following requirements: - [Operating Systems and Container Runtime Requirements](#operating-systems-and-container-runtime-requirements) - [Hardware Requirements](#hardware-requirements) - - [CPU and Memory](#cpu-and-memory) - - [CPU and Memory for Rancher prior to v2.4.0](#cpu-and-memory-for-rancher-prior-to-v2-4-0) - - [Disks](#disks) +- [CPU and Memory](#cpu-and-memory) + - [RKE and Hosted Kubernetes](#rke-and-hosted-kubernetes) + - [K3s Kubernetes](#k3s-kubernetes) + - [RancherD](#rancherd) + - [Rancher prior to v2.4.0](#rancher-prior-to-v2-4-0) +- [Disks](#disks) - [Networking Requirements](#networking-requirements) - [Node IP Addresses](#node-ip-addresses) - [Port Requirements](#port-requirements) @@ -72,16 +75,17 @@ Docker is required for Helm chart installs, and it can be installed by following Docker is not required for RancherD installs. # Hardware Requirements -This section describes the CPU, memory, and disk requirements for the nodes where the Rancher server is installed. +The following sections describe the CPU, memory, and disk requirements for the nodes where the Rancher server is installed. -### CPU and Memory +# CPU and Memory Hardware requirements scale based on the size of your Rancher deployment. Provision each individual node according to the requirements. The requirements are different depending on if you are installing Rancher in a single container with Docker, or if you are installing Rancher on a Kubernetes cluster. -{{% tabs %}} -{{% tab "RKE" %}} +### RKE and Hosted Kubernetes -These requirements apply to each host in an [RKE Kubernetes cluster where the Rancher server is installed.]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/) +These CPU and memory requirements apply to each host in the Kubernetes cluster where the Rancher server is installed. + +These requirements apply to RKE Kubernetes clusters, as well as to hosted Kubernetes clusters such as EKS. Performance increased in Rancher v2.4.0. 
For the requirements of Rancher prior to v2.4.0, refer to [this section.](#cpu-and-memory-for-rancher-prior-to-v2-4-0) @@ -94,11 +98,10 @@ Performance increased in Rancher v2.4.0. For the requirements of Rancher prior t | XX-Large | Up to 2000 | Up to 20,000 | 32 | 128 GB | [Contact Rancher](https://rancher.com/contact/) for more than 2000 clusters and/or 20,000 nodes. -{{% /tab %}} -{{% tab "K3s" %}} +### K3s Kubernetes -These requirements apply to each host in a [K3s Kubernetes cluster where the Rancher server is installed.]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/) +These CPU and memory requirements apply to each host in a [K3s Kubernetes cluster where the Rancher server is installed.]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/) | Deployment Size | Clusters | Nodes | vCPUs | RAM | Database Size | | --------------- | ---------- | ------------ | -------| ---------| ------------------------- | @@ -110,37 +113,30 @@ These requirements apply to each host in a [K3s Kubernetes cluster where the Ran [Contact Rancher](https://rancher.com/contact/) for more than 2000 clusters and/or 20,000 nodes. -{{% /tab %}} - -{{% tab "RancherD" %}} +### RancherD _RancherD is available as of v2.5.4. It is an experimental feature._ -The following requirements apply to each instance with RancherD installed. Minimum recommendations are outlined here. +These CPU and memory requirements apply to each instance with RancherD installed. Minimum recommendations are outlined here. | Deployment Size | Clusters | Nodes | vCPUs | RAM | | --------------- | -------- | --------- | ----- | ---- | | Small | Up to 5 | Up to 50 | 2 | 5 GB | | Medium | Up to 15 | Up to 200 | 3 | 9 GB | -{{% /tab %}} +### Docker -{{% tab "Docker" %}} - -These requirements apply to a host with a [single-node]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker) installation of Rancher. +These CPU and memory requirements apply to a host with a [single-node]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker) installation of Rancher. | Deployment Size | Clusters | Nodes | vCPUs | RAM | | --------------- | -------- | --------- | ----- | ---- | | Small | Up to 5 | Up to 50 | 1 | 4 GB | | Medium | Up to 15 | Up to 200 | 2 | 8 GB | -{{% /tab %}} -{{% /tabs %}} - -### CPU and Memory for Rancher prior to v2.4.0 +### Rancher prior to v2.4.0 {{% accordion label="Click to expand" %}} -These requirements apply to installing Rancher on an RKE Kubernetes cluster prior to Rancher v2.4.0: +These CPU and memory requirements apply to installing Rancher on an RKE Kubernetes cluster prior to Rancher v2.4.0: | Deployment Size | Clusters | Nodes | vCPUs | RAM | | --------------- | --------- | ---------- | ----------------------------------------------- | ----------------------------------------------- | @@ -151,7 +147,7 @@ These requirements apply to installing Rancher on an RKE Kubernetes cluster prio | XX-Large | 100+ | 1000+ | [Contact Rancher](https://rancher.com/contact/) | [Contact Rancher](https://rancher.com/contact/) | {{% /accordion %}} -### Disks +# Disks Rancher performance depends on etcd in the cluster performance. To ensure optimal speed, we recommend always using SSD disks to back your Rancher management Kubernetes cluster. On cloud providers, you will also want to use the minimum size that allows the maximum IOPS. In larger clusters, consider using dedicated storage devices for etcd data and wal directories. 
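Since the Disks guidance above hinges on etcd write latency, a rough way to sanity-check a candidate disk is the `fio` sequential-write test commonly used for etcd benchmarking; the directory, size, and block size below are illustrative values borrowed from that convention, not requirements stated in the patch:

```bash
# Rough etcd disk latency check; look at the fsync/fdatasync percentiles
# in the output to judge whether the disk is fast enough for etcd.
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd --size=22m --bs=2300 \
    --name=etcd-disk-check
```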
From 79b48f7b5e1c29cc58c579c2de2cc63547d94733 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Tue, 9 Feb 2021 10:20:31 -0700 Subject: [PATCH 16/36] Fix internal link --- content/rancher/v2.x/en/installation/requirements/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/rancher/v2.x/en/installation/requirements/_index.md b/content/rancher/v2.x/en/installation/requirements/_index.md index 8d5622dcf05..7768a463908 100644 --- a/content/rancher/v2.x/en/installation/requirements/_index.md +++ b/content/rancher/v2.x/en/installation/requirements/_index.md @@ -16,7 +16,7 @@ Make sure the node(s) for the Rancher server fulfill the following requirements: - [RKE and Hosted Kubernetes](#rke-and-hosted-kubernetes) - [K3s Kubernetes](#k3s-kubernetes) - [RancherD](#rancherd) - - [Rancher prior to v2.4.0](#rancher-prior-to-v2-4-0) + - [CPU and Memory for Rancher prior to v2.4.0](#cpu-and-memory-for-rancher-prior-to-v2-4-0) - [Disks](#disks) - [Networking Requirements](#networking-requirements) - [Node IP Addresses](#node-ip-addresses) @@ -133,7 +133,7 @@ These CPU and memory requirements apply to a host with a [single-node]({{ Date: Tue, 9 Feb 2021 14:08:17 -0800 Subject: [PATCH 17/36] Fix typo --- .../v2.x/en/cluster-provisioning/registered-clusters/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rancher/v2.x/en/cluster-provisioning/registered-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/registered-clusters/_index.md index 25bb4babb45..7c040256ef7 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/registered-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/registered-clusters/_index.md @@ -3,7 +3,7 @@ title: Registering Existing Clusters weight: 6 --- -_Available of of v2.5_ +_Available as of v2.5_ The cluster registration feature replaced the feature to import clusters. From a9fcaf454056161d765649fec88daee5724c9239 Mon Sep 17 00:00:00 2001 From: Napsty Date: Wed, 10 Feb 2021 07:21:44 +0100 Subject: [PATCH 18/36] Update helm stable repository (issue 3017) --- .../en/installation/install-rancher-on-k8s/upgrades/_index.md | 2 +- .../install-rancher-on-k8s/upgrades/helm2/_index.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/_index.md index e8c8fca4cd1..f622a84befd 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/_index.md @@ -107,7 +107,7 @@ You'll use the backup as a restoration point if something goes wrong during upgr helm repo list NAME URL - stable https://kubernetes-charts.storage.googleapis.com + stable https://charts.helm.sh/stable rancher- https://releases.rancher.com/server-charts/ ``` diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md index d7f7091919e..1fe92cee4b6 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md @@ -64,7 +64,7 @@ of your Kubernetes cluster running Rancher server. 
You'll use the snapshot as a helm repo list NAME URL - stable https://kubernetes-charts.storage.googleapis.com + stable https://charts.helm.sh/stable rancher- https://releases.rancher.com/server-charts/ ``` From 260117cb1ac327ba8cc14e50ec9965cd65aacc31 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Wed, 10 Feb 2021 17:43:15 -0700 Subject: [PATCH 19/36] Document steps to update a private CA #2980 --- .../resources/tls-secrets/_index.md | 6 +- .../resources/update-ca-cert/_index.md | 145 ++++++++++++++++++ 2 files changed, 150 insertions(+), 1 deletion(-) create mode 100644 content/rancher/v2.x/en/installation/resources/update-ca-cert/_index.md diff --git a/content/rancher/v2.x/en/installation/resources/tls-secrets/_index.md b/content/rancher/v2.x/en/installation/resources/tls-secrets/_index.md index 3c339f16c1a..da47ec64df6 100644 --- a/content/rancher/v2.x/en/installation/resources/tls-secrets/_index.md +++ b/content/rancher/v2.x/en/installation/resources/tls-secrets/_index.md @@ -23,7 +23,7 @@ kubectl -n cattle-system create secret tls tls-rancher-ingress \ > **Note:** If you want to replace the certificate, you can delete the `tls-rancher-ingress` secret using `kubectl -n cattle-system delete secret tls-rancher-ingress` and add a new one using the command shown above. If you are using a private CA signed certificate, replacing the certificate is only possible if the new certificate is signed by the same CA as the certificate currently in use. -### Using a Private CA Signed Certificate +# Using a Private CA Signed Certificate If you are using a private CA, Rancher requires a copy of the CA certificate which is used by the Rancher Agent to validate the connection to the server. @@ -35,3 +35,7 @@ kubectl -n cattle-system create secret generic tls-ca \ ``` > **Note:** The configured `tls-ca` secret is retrieved when Rancher starts. On a running Rancher installation the updated CA will take effect after new Rancher pods are started. + +# Updating a Private CA Certificate + +Follow the steps on [this page]({{}}/rancher/v2.x/en/installation/resources/update-ca-cert) to update the SSL certificate of the ingress in a Rancher [high availability Kubernetes installation]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/) or to switch from the default self-signed certificate to a custom certificate. \ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/resources/update-ca-cert/_index.md b/content/rancher/v2.x/en/installation/resources/update-ca-cert/_index.md new file mode 100644 index 00000000000..a46b54f8552 --- /dev/null +++ b/content/rancher/v2.x/en/installation/resources/update-ca-cert/_index.md @@ -0,0 +1,145 @@ +--- +title: Updating a Private CA Certificate +weight: 10 +--- + +Follow these steps to update the SSL certificate of the ingress in a Rancher [high availability Kubernetes installation]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/) or to switch from the default self-signed certificate to a custom certificate. + +A summary of the steps is as follows: + +1. Create or update the `tls-rancher-ingress` Kubernetes secret resource with the new certificate and private key. +2. Create or update the `tls-ca` Kubernetes secret resource with the root CA certificate (only required when using a private CA). +3. Update the Rancher installation using the Helm CLI. +4. Reconfigure the Rancher agents to trust the new CA certificate. + +The details of these instructions are below. + +# 1. 
Create/update the certificate secret resource First, concatenate the server certificate followed by any intermediate certificate(s) into a file named `tls.crt` and provide the corresponding certificate key in a file named `tls.key`. If you are switching the install from using the Rancher self-signed certificate or Let's Encrypt issued certificates, use the following command to create the `tls-rancher-ingress` secret resource in your Rancher HA cluster: ``` $ kubectl -n cattle-system create secret tls tls-rancher-ingress \ --cert=tls.crt \ --key=tls.key ``` Alternatively, to update an existing certificate secret: ``` $ kubectl -n cattle-system create secret tls tls-rancher-ingress \ --cert=tls.crt \ --key=tls.key \ --dry-run --save-config -o yaml | kubectl apply -f - ``` # 2. Create/update the CA certificate secret resource If the new certificate was signed by a private CA, you will need to copy the corresponding root CA certificate into a file named `cacerts.pem` and create or update the `tls-ca` secret in the `cattle-system` namespace. If the certificate was signed by an intermediate CA, then the `cacerts.pem` must contain both the intermediate and root CA certificates (in this order). To create the initial secret: ``` $ kubectl -n cattle-system create secret generic tls-ca \ --from-file=cacerts.pem ``` To update an existing `tls-ca` secret: ``` $ kubectl -n cattle-system create secret generic tls-ca \ --from-file=cacerts.pem \ --dry-run --save-config -o yaml | kubectl apply -f - ``` # 3. Reconfigure the Rancher deployment > Before proceeding, generate an API token in the Rancher UI (User > API & Keys) and save the Bearer Token, which you might need in step 4. This step is required if Rancher was initially installed with self-signed certificates (`ingress.tls.source=rancher`) or with a Let's Encrypt issued certificate (`ingress.tls.source=letsEncrypt`). It ensures that the Rancher pods and ingress resources are reconfigured to use the new server and optional CA certificate. To update the Helm deployment you will need to use the same (`--set`) options that were used during initial installation. Check with: ``` $ helm get values rancher -n cattle-system ``` Also get the version string of the currently deployed Rancher chart: ``` $ helm ls -A ``` Upgrade the Helm application instance using the original configuration values, making sure to specify `ingress.tls.source=secret` as well as the current chart version to prevent an application upgrade. If the certificate was signed by a private CA, add the `--set privateCA=true` argument as well. Also make sure to read the documentation describing the initial installation using [custom certificates]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/#6-install-rancher-with-helm-and-your-chosen-certificate-option). ``` helm upgrade rancher rancher-stable/rancher \ --namespace cattle-system \ --version <CHART_VERSION> \ --set hostname=rancher.my.org \ --set ingress.tls.source=secret \ --set ... ``` When the upgrade is completed, navigate to `https://<RANCHER_HOSTNAME>/v3/settings/cacerts` to verify that the value matches the CA certificate written in the `tls-ca` secret earlier. # 4. Reconfigure Rancher agents to trust the private CA This section covers three methods to reconfigure Rancher agents to trust the private CA. 
This step is required if either of the following is true: - Rancher was initially configured to use the Rancher self-signed certificate (`ingress.tls.source=rancher`) or with a Let's Encrypt issued certificate (`ingress.tls.source=letsEncrypt`) - The root CA certificate for the new custom certificate has changed ### Why is this step required? When Rancher is configured with a certificate signed by a private CA, the CA certificate chain is downloaded into Rancher agent containers. Agents compare the checksum of the downloaded certificate against the `CATTLE_CA_CHECKSUM` environment variable. This means that, when the private CA certificate is changed on the Rancher server side, the environment variable `CATTLE_CA_CHECKSUM` must be updated accordingly. ### Which method should I choose? Method 1 is the easiest one but requires all clusters to be connected to Rancher after the certificates have been rotated. This is usually the case if the process is performed right after updating the Rancher deployment (Step 3). If the clusters have lost connection to Rancher but you have [Authorized Cluster Endpoints](https://rancher.com/docs/rancher/v2.x/en/cluster-admin/cluster-access/ace/) enabled, then go with method 2. Method 3 can be used as a fallback if methods 1 and 2 are unfeasible. ### Method 1: Kubectl command For each cluster under Rancher management (including `local`), run the following command using the Kubeconfig file of the Rancher management cluster (RKE or K3S). ``` kubectl patch clusters <CLUSTER_ID> -p '{"status":{"agentImage":"dummy"}}' --type merge ``` This command will cause all Agent Kubernetes resources to be reconfigured with the checksum of the new certificate. ### Method 2: Manually update checksum Manually patch the agent Kubernetes resources by updating the `CATTLE_CA_CHECKSUM` environment variable to the value matching the checksum of the new CA certificate. Generate the new checksum value like so: ``` $ curl -k -s -fL https://<RANCHER_HOSTNAME>/v3/settings/cacerts | jq -r .value > cacert.tmp $ sha256sum cacert.tmp | awk '{print $1}' ``` Using a Kubeconfig for each downstream cluster, update the environment variable for the two agent deployments. ``` $ kubectl edit -n cattle-system ds/cattle-node-agent $ kubectl edit -n cattle-system deployment/cluster-agent ``` ### Method 3: Recreate Rancher agents With this method, you are recreating the Rancher agents by running a set of commands on a controlplane node of each downstream cluster. 
+ +First, generate the agent definitions as described here: https://gist.github.com/superseb/076f20146e012f1d4e289f5bd1bd4971 + +Then, connect to a controlplane node of the downstream cluster via SSH, create a Kubeconfig and apply the definitions: +https://gist.github.com/superseb/b14ed3b5535f621ad3d2aa6a4cd6443b \ No newline at end of file From 22242fbe3a9c7226a791d4e943720216eda9d534 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Wed, 10 Feb 2021 18:07:05 -0700 Subject: [PATCH 20/36] Fix formatting --- .../install-rancher-on-k8s/upgrades/helm2/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md index 6fc638e71df..4037760c9aa 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md @@ -120,8 +120,8 @@ If you are currently running the cert-manger whose version is older than v0.11, ``` helm delete rancher ``` - -In case this results in an error that the release "rancher" was not found, make sure you are using the correct deployment name. Use `helm list` to list the helm-deployed releases. + + In case this results in an error that the release "rancher" was not found, make sure you are using the correct deployment name. Use `helm list` to list the helm-deployed releases. 2. Uninstall and reinstall `cert-manager` according to the instructions on the [Upgrading Cert-Manager]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/helm-2-instructions) page. From 896ac36dfb18bae7fdba45e6e1b7d5b8e8538949 Mon Sep 17 00:00:00 2001 From: Nelson Roberts Date: Thu, 11 Feb 2021 12:53:56 -0700 Subject: [PATCH 21/36] fix PDF link --- .../v2.x/en/security/rancher-2.5/1.5-hardening-2.5/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rancher/v2.x/en/security/rancher-2.5/1.5-hardening-2.5/_index.md b/content/rancher/v2.x/en/security/rancher-2.5/1.5-hardening-2.5/_index.md index cb5f7d2156e..dfa71d7d3f1 100644 --- a/content/rancher/v2.x/en/security/rancher-2.5/1.5-hardening-2.5/_index.md +++ b/content/rancher/v2.x/en/security/rancher-2.5/1.5-hardening-2.5/_index.md @@ -13,7 +13,7 @@ This hardening guide is intended to be used for RKE clusters and associated with ----------------|-----------------------|------------------ Rancher v2.5 | Benchmark v1.5 | Kubernetes 1.15 -[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.5/Rancher_Hardening_Guide_CIS_1.5.pdf) +[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.5/Rancher_Hardening_Guide_CIS_1.6.pdf) ### Overview From 8a45c64cc864c498919b44879192806d80f1d951 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Fri, 12 Feb 2021 18:24:41 -0700 Subject: [PATCH 22/36] Say RancherD is experimental in more places --- .../install-rancher-on-linux/rancherd-configuration/_index.md | 2 ++ .../installation/install-rancher-on-linux/rollbacks/_index.md | 2 ++ .../en/installation/install-rancher-on-linux/upgrades/_index.md | 2 ++ 3 files changed, 6 insertions(+) diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-linux/rancherd-configuration/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-linux/rancherd-configuration/_index.md index 889232d8bf9..6cc91405735 
100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-linux/rancherd-configuration/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-linux/rancherd-configuration/_index.md @@ -3,6 +3,8 @@ title: RancherD Configuration Reference weight: 1 --- +> This is an experimental feature. + In RancherD, a server node is defined as a machine (bare-metal or virtual) running the `rancherd server` command. The server runs the Kubernetes API as well as Kubernetes workloads. An agent node is defined as a machine running the `rancherd agent` command. They don't run the Kubernetes API. To add nodes designated to run your apps and services, join agent nodes to your cluster. diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-linux/rollbacks/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-linux/rollbacks/_index.md index 5b3ed82e35c..20870c06ec7 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-linux/rollbacks/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-linux/rollbacks/_index.md @@ -3,4 +3,6 @@ title: Rollbacks weight: 3 --- +> RancherD is an experimental feature. + To roll back Rancher to a previous version, re-run the installation script with the previous version specified in the `INSTALL_RANCHERD_VERSION` environment variable. \ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-linux/upgrades/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-linux/upgrades/_index.md index 6244161d137..623576dc06c 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-linux/upgrades/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-linux/upgrades/_index.md @@ -3,6 +3,8 @@ title: Upgrades weight: 2 --- +> RancherD is an experimental feature. + When RancherD is upgraded, the Rancher Helm controller and the Fleet pods are upgraded. During a RancherD upgrade, there is very little downtime, but it is possible that RKE2 may be down for a minute, during which you could lose access to Rancher. From e1f70a86736be74c333e796b7f9fc1cfeed46442 Mon Sep 17 00:00:00 2001 From: Martin-Weiss Date: Tue, 16 Feb 2021 15:48:00 +0100 Subject: [PATCH 23/36] Update _index.md Change typo rancher-backup to "Rancher Backups" --- content/rancher/v2.x/en/backups/v2.5/back-up-rancher/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rancher/v2.x/en/backups/v2.5/back-up-rancher/_index.md b/content/rancher/v2.x/en/backups/v2.5/back-up-rancher/_index.md index 061cbcdf8e2..1ca22a01fa1 100644 --- a/content/rancher/v2.x/en/backups/v2.5/back-up-rancher/_index.md +++ b/content/rancher/v2.x/en/backups/v2.5/back-up-rancher/_index.md @@ -19,7 +19,7 @@ Backups are created as .tar.gz files. These files can be pushed to S3 or Minio, 1. In the Rancher UI, go to the **Cluster Explorer.** 1. Click **Apps.** -1. Click `rancher-backup`. +1. Click `Rancher Backups`. 1. Configure the default storage location. For help, refer to the [storage configuration section.](../configuration/storage-config) ### 2. 
Perform a Backup From 6c76521af794a7504e821e817355c824da6cd5f2 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Wed, 17 Feb 2021 10:55:46 -0700 Subject: [PATCH 24/36] Update link to proxy install docs #2978 --- content/rancher/v2.x/en/installation/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rancher/v2.x/en/installation/_index.md b/content/rancher/v2.x/en/installation/_index.md index cb43a052f17..052e4ed42d4 100644 --- a/content/rancher/v2.x/en/installation/_index.md +++ b/content/rancher/v2.x/en/installation/_index.md @@ -67,7 +67,7 @@ There are also separate instructions for installing Rancher in an air gap enviro | Level of Internet Access | Kubernetes Installation - Strongly Recommended | Docker Installation | | ---------------------------------- | ------------------------------ | ---------- | | With direct access to the Internet | [Docs]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/) | [Docs]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker) | -| Behind an HTTP proxy | These [docs,]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/) plus this [configuration]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/chart-options/#http-proxy) | These [docs,]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker) plus this [configuration]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/proxy/) | +| Behind an HTTP proxy | [Docs]({{}}/rancher/v2.x/en/installation/other-installation-methods/behind-proxy/) | These [docs,]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker) plus this [configuration]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/proxy/) | | In an air gap environment | [Docs]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap) | [Docs]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap) | We recommend installing Rancher on a Kubernetes cluster, because in a multi-node cluster, the Rancher management server becomes highly available. This high-availability configuration helps maintain consistent access to the downstream Kubernetes clusters that Rancher will manage. From 1ad3c8d65397005e4982fe3562926c04fb084346 Mon Sep 17 00:00:00 2001 From: Lior Kesos Date: Thu, 18 Feb 2021 00:06:40 +0200 Subject: [PATCH 25/36] Update _index.md didnt close the " in the es-flow --- content/rancher/v2.x/en/logging/v2.5/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rancher/v2.x/en/logging/v2.5/_index.md b/content/rancher/v2.x/en/logging/v2.5/_index.md index 01771387ab4..5a839ff1d70 100644 --- a/content/rancher/v2.x/en/logging/v2.5/_index.md +++ b/content/rancher/v2.x/en/logging/v2.5/_index.md @@ -115,7 +115,7 @@ metadata: namespace: "cattle-logging-system" spec: globalOutputRefs: - - "example-es + - "example-es" ``` We should now see our configured index with logs in it. 
From 5a9e1f73c1b40162b8b0f1c73b0945b0935d707f Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Wed, 17 Feb 2021 17:14:44 -0800 Subject: [PATCH 26/36] Replace 'prior to' with 'before' Use wording understood by wider audience --- content/k3s/latest/en/advanced/_index.md | 2 +- .../v1.x/en/installation/amazon-ecs/_index.md | 2 +- .../drivers/node-drivers/_index.md | 4 +-- .../rbac/default-custom-roles/_index.md | 2 +- .../rbac/global-permissions/_index.md | 2 +- .../rancher/v2.x/en/backups/v2.5/_index.md | 6 ++-- .../v2.5/rancher-managed/logging/_index.md | 2 +- .../en/cis-scans/v2.4/skipped-tests/_index.md | 4 +-- .../en/cis-scans/v2.5/skipped-tests/_index.md | 2 +- .../cluster-admin/backing-up-etcd/_index.md | 6 ++-- .../v2.x/en/cluster-admin/nodes/_index.md | 4 +-- .../en/cluster-admin/restoring-etcd/_index.md | 6 ++-- .../upgrading-kubernetes/_index.md | 2 +- .../attaching-existing-storage/_index.md | 2 +- .../how-storage-works/_index.md | 2 +- .../provisioning-new-storage/_index.md | 2 +- .../hosted-kubernetes-clusters/eks/_index.md | 4 +-- .../cloud-providers/vsphere/_index.md | 2 +- .../rke-clusters/node-pools/azure/_index.md | 2 +- .../azure-node-template-config/_index.md | 2 +- .../node-pools/digital-ocean/_index.md | 2 +- .../do-node-template-config/_index.md | 2 +- .../rke-clusters/node-pools/ec2/_index.md | 2 +- .../ec2/ec2-node-template-config/_index.md | 2 +- .../rke-clusters/node-pools/vsphere/_index.md | 2 +- .../provisioning-vsphere-clusters/_index.md | 8 ++--- .../vsphere-node-template-config/_index.md | 2 +- .../prior-to-2.0.4/_index.md | 4 +-- .../rke-clusters/options/_index.md | 2 +- .../docs-for-2.1-and-2.2/_index.md | 2 +- .../v2.x/en/deploy-across-clusters/_index.md | 2 +- content/rancher/v2.x/en/helm-charts/_index.md | 2 +- .../en/helm-charts/legacy-catalogs/_index.md | 2 +- .../legacy-catalogs/adding-catalogs/_index.md | 4 +-- .../legacy-catalogs/built-in/_index.md | 4 +-- .../legacy-catalogs/globaldns/_index.md | 2 +- .../legacy-catalogs/launching-apps/_index.md | 4 +-- .../legacy-catalogs/managing-apps/_index.md | 8 ++--- .../rancher/v2.x/en/installation/_index.md | 2 +- .../install-rancher-on-k8s/_index.md | 2 +- .../rollbacks/_index.md | 2 +- .../upgrades/namespace-migration/_index.md | 2 +- .../other-installation-methods/_index.md | 2 +- .../air-gap/install-rancher/_index.md | 10 +++--- .../air-gap/launch-kubernetes/_index.md | 2 +- .../single-node-docker/_index.md | 2 +- .../single-node-rollbacks/_index.md | 4 +-- .../single-node-upgrades/_index.md | 2 +- .../en/installation/requirements/_index.md | 8 ++--- .../installation/requirements/ports/_index.md | 2 +- .../air-gap-helm2/install-rancher/_index.md | 10 +++--- .../advanced/api-audit-log/_index.md | 2 +- .../resources/choosing-version/_index.md | 2 +- .../resources/k8s-tutorials/_index.md | 2 +- .../resources/k8s-tutorials/ha-RKE/_index.md | 2 +- .../resources/local-system-charts/_index.md | 4 +-- .../selectors-and-scrape/_index.md | 2 +- .../en/k8s-in-rancher/certificates/_index.md | 4 +-- .../horitzontal-pod-autoscaler/_index.md | 4 +-- .../hpa-for-rancher-before-2_0_7/_index.md | 2 +- .../manage-hpa-with-kubectl/_index.md | 2 +- .../ingress/_index.md | 2 +- .../en/k8s-in-rancher/registries/_index.md | 4 +-- .../service-discovery/_index.md | 2 +- .../workloads/add-a-sidecar/_index.md | 2 +- .../workloads/deploy-workloads/_index.md | 2 +- .../v2.x/en/logging/v2.0.x-v2.4.x/_index.md | 2 +- .../v2.0.x-v2.4.x/project-logging/_index.md | 2 +- content/rancher/v2.x/en/longhorn/_index.md | 2 +- 
.../v2.0.x-v2.4.x/_index.md | 2 +- .../cluster-alerts/project-alerts/_index.md | 4 +-- .../cluster-metrics/_index.md | 2 +- .../project-monitoring/_index.md | 2 +- .../viewing-metrics/_index.md | 6 ++-- .../en/monitoring-alerting/v2.5/_index.md | 2 +- .../v2.5/migrating/_index.md | 6 ++-- .../architecture-recommendations/_index.md | 2 +- .../v2.x/en/overview/architecture/_index.md | 2 +- content/rancher/v2.x/en/pipelines/_index.md | 12 +++---- .../v2.x/en/pipelines/config/_index.md | 12 +++---- .../en/pipelines/docs-for-v2.0.x/_index.md | 2 +- .../v2.x/en/pipelines/example-repos/_index.md | 6 ++-- .../v2.x/en/pipelines/storage/_index.md | 4 +-- .../override-container-default/_index.md | 2 +- .../_index.md | 4 +-- .../_index.md | 2 +- .../rancher-v2.3.5/hardening-2.3.5/_index.md | 2 +- .../rancher-2.4/hardening-2.4/_index.md | 2 +- .../rancher-2.5/1.5-hardening-2.5/_index.md | 2 +- .../rancher-2.5/1.6-hardening-2.5/_index.md | 2 +- .../discover-services/_index.md | 2 +- .../v1.6-migration/load-balancing/_index.md | 4 +-- .../run-migration-tool/_index.md | 2 +- .../schedule-workloads/_index.md | 2 +- .../en/config-options/add-ons/_index.md | 2 +- .../add-ons/ingress-controllers/_index.md | 2 +- .../add-ons/user-defined-add-ons/_index.md | 2 +- .../vsphere/enabling-uuid/_index.md | 2 +- .../latest/en/config-options/nodes/_index.md | 2 +- .../private-registries/_index.md | 2 +- .../services/services-extras/_index.md | 4 +-- .../en/config-options/system-images/_index.md | 2 +- .../example-scenarios/_index.md | 34 +++++++++---------- .../one-time-snapshots/_index.md | 2 +- .../recurring-snapshots/_index.md | 8 ++--- .../restoring-from-backup/_index.md | 2 +- content/rke/latest/en/installation/_index.md | 2 +- content/rke/latest/en/upgrades/_index.md | 6 ++-- .../en/upgrades/how-upgrades-work/_index.md | 2 +- 109 files changed, 187 insertions(+), 187 deletions(-) diff --git a/content/k3s/latest/en/advanced/_index.md b/content/k3s/latest/en/advanced/_index.md index 633399b6614..f48955149c4 100644 --- a/content/k3s/latest/en/advanced/_index.md +++ b/content/k3s/latest/en/advanced/_index.md @@ -356,7 +356,7 @@ The `--disable-selinux` option should not be used. It is deprecated and will be Using a custom `--data-dir` under SELinux is not supported. To customize it, you would most likely need to write your own custom policy. For guidance, you could refer to the [containers/container-selinux](https://github.com/containers/container-selinux) repository, which contains the SELinux policy files for Container Runtimes, and the [rancher/k3s-selinux](https://github.com/rancher/k3s-selinux) repository, which contains the SELinux policy for K3s . {{%/tab%}} -{{% tab "K3s prior to v1.19.1+k3s1" %}} +{{% tab "K3s before v1.19.1+k3s1" %}} SELinux is automatically enabled for the built-in containerd. diff --git a/content/os/v1.x/en/installation/amazon-ecs/_index.md b/content/os/v1.x/en/installation/amazon-ecs/_index.md index bc642bee97d..1379784c5bf 100644 --- a/content/os/v1.x/en/installation/amazon-ecs/_index.md +++ b/content/os/v1.x/en/installation/amazon-ecs/_index.md @@ -7,7 +7,7 @@ weight: 190 ### Pre-Requisites -Prior to launching RancherOS EC2 instances, the [ECS Container Instance IAM Role](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance_IAM_role.html) will need to have been created. This `ecsInstanceRole` will need to be used when launching EC2 instances. If you have been using ECS, you created this role if you followed the ECS "Get Started" interactive guide. 
+Before launching RancherOS EC2 instances, the [ECS Container Instance IAM Role](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance_IAM_role.html) will need to have been created. This `ecsInstanceRole` will need to be used when launching EC2 instances. If you have been using ECS, you created this role if you followed the ECS "Get Started" interactive guide. ### Launching an instance with ECS diff --git a/content/rancher/v2.x/en/admin-settings/drivers/node-drivers/_index.md b/content/rancher/v2.x/en/admin-settings/drivers/node-drivers/_index.md index 2d88d7ac970..7d2154acada 100644 --- a/content/rancher/v2.x/en/admin-settings/drivers/node-drivers/_index.md +++ b/content/rancher/v2.x/en/admin-settings/drivers/node-drivers/_index.md @@ -21,7 +21,7 @@ If there are specific node drivers that you don't want to show to your users, yo By default, Rancher only activates drivers for the most popular cloud providers, Amazon EC2, Azure, DigitalOcean and vSphere. If you want to show or hide any node driver, you can change its status. -1. From the **Global** view, choose **Tools > Drivers** in the navigation bar. From the **Drivers** page, select the **Node Drivers** tab. In version prior to v2.2.0, you can select **Node Drivers** directly in the navigation bar. +1. From the **Global** view, choose **Tools > Drivers** in the navigation bar. From the **Drivers** page, select the **Node Drivers** tab. In version before v2.2.0, you can select **Node Drivers** directly in the navigation bar. 2. Select the driver that you wish to **Activate** or **Deactivate** and select the appropriate icon. @@ -29,7 +29,7 @@ By default, Rancher only activates drivers for the most popular cloud providers, If you want to use a node driver that Rancher doesn't support out-of-the-box, you can add that provider's driver in order to start using them to create node templates and eventually node pools for your Kubernetes cluster. -1. From the **Global** view, choose **Tools > Drivers** in the navigation bar. From the **Drivers** page, select the **Node Drivers** tab. In version prior to v2.2.0, you can select **Node Drivers** directly in the navigation bar. +1. From the **Global** view, choose **Tools > Drivers** in the navigation bar. From the **Drivers** page, select the **Node Drivers** tab. In version before v2.2.0, you can select **Node Drivers** directly in the navigation bar. 2. Click **Add Node Driver**. diff --git a/content/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/_index.md b/content/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/_index.md index 02b97993b5a..9b5b4ea0154 100644 --- a/content/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/_index.md +++ b/content/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/_index.md @@ -61,7 +61,7 @@ The steps to add custom roles differ depending on the version of Rancher. 1. Click **Create**. {{% /tab %}} -{{% tab "Rancher prior to v2.0.7" %}} +{{% tab "Rancher before v2.0.7" %}} 1. From the **Global** view, select **Security > Roles** from the main menu. 
diff --git a/content/rancher/v2.x/en/admin-settings/rbac/global-permissions/_index.md b/content/rancher/v2.x/en/admin-settings/rbac/global-permissions/_index.md index 09b276cee5f..30a255d33d2 100644 --- a/content/rancher/v2.x/en/admin-settings/rbac/global-permissions/_index.md +++ b/content/rancher/v2.x/en/admin-settings/rbac/global-permissions/_index.md @@ -55,7 +55,7 @@ The `restricted-admin` permissions are as follows: ### Upgrading from Rancher with a Hidden Local Cluster -Prior to Rancher v2.5, it was possible to run the Rancher server using this flag to hide the local cluster: +Before Rancher v2.5, it was possible to run the Rancher server using this flag to hide the local cluster: ``` --add-local=false diff --git a/content/rancher/v2.x/en/backups/v2.5/_index.md b/content/rancher/v2.x/en/backups/v2.5/_index.md index a20fe5e4f83..2c371b910c8 100644 --- a/content/rancher/v2.x/en/backups/v2.5/_index.md +++ b/content/rancher/v2.x/en/backups/v2.5/_index.md @@ -14,7 +14,7 @@ The Rancher version must be v2.5.0 and up to use this approach of backing up and - [Changes in Rancher v2.5](#changes-in-rancher-v2-5) - [Backup and Restore for Rancher v2.5 installed with Docker](#backup-and-restore-for-rancher-v2-5-installed-with-docker) - - [Backup and Restore for Rancher installed on a Kubernetes Cluster Prior to v2.5](#backup-and-restore-for-rancher-installed-on-a-kubernetes-cluster-prior-to-v2-5) + - [Backup and Restore for Rancher installed on a Kubernetes Cluster Before v2.5](#backup-and-restore-for-rancher-installed-on-a-kubernetes-cluster-before-v2-5) - [How Backups and Restores Work](#how-backups-and-restores-work) - [Installing the rancher-backup Operator](#installing-the-rancher-backup-operator) - [Installing rancher-backup with the Rancher UI](#installing-rancher-backup-with-the-rancher-ui) @@ -40,9 +40,9 @@ In Rancher v2.5, it is now supported to install Rancher hosted Kubernetes cluste For Rancher installed with Docker, refer to the same steps used up till 2.5 for [backups](./docker-installs/docker-backups) and [restores.](./docker-installs/docker-backups) -### Backup and Restore for Rancher installed on a Kubernetes Cluster Prior to v2.5 +### Backup and Restore for Rancher installed on a Kubernetes Cluster Before v2.5 -For Rancher prior to v2.5, the way that Rancher is backed up and restored differs based on the way that Rancher was installed. Our legacy backup and restore documentation is here: +For Rancher before v2.5, the way that Rancher is backed up and restored differs based on the way that Rancher was installed. Our legacy backup and restore documentation is here: - For Rancher installed on an RKE Kubernetes cluster, refer to the legacy [backup]({{}}/rancher/v2.x/en/backups/legacy/backup/ha-backups) and [restore]({{}}/rancher/v2.x/en/backups/legacy/restore/rke-restore) documentation. - For Rancher installed on a K3s Kubernetes cluster, refer to the legacy [backup]({{}}/rancher/v2.x/en/backups/legacy/backup/k3s-backups) and [restore]({{}}/rancher/v2.x/en/backups/legacy/restore/k3s-restore) documentation. 
diff --git a/content/rancher/v2.x/en/best-practices/v2.5/rancher-managed/logging/_index.md b/content/rancher/v2.x/en/best-practices/v2.5/rancher-managed/logging/_index.md index 32da6bbc48d..3e9cf02ad2e 100644 --- a/content/rancher/v2.x/en/best-practices/v2.5/rancher-managed/logging/_index.md +++ b/content/rancher/v2.x/en/best-practices/v2.5/rancher-managed/logging/_index.md @@ -11,7 +11,7 @@ In this guide, we recommend best practices for cluster-level logging and applica # Changes in Logging in Rancher v2.5 -Prior to Rancher v2.5, logging in Rancher has historically been a pretty static integration. There were a fixed list of aggregators to choose from (ElasticSearch, Splunk, Kafka, Fluentd and Syslog), and only two configuration points to choose (Cluster-level and Project-level). +Before Rancher v2.5, logging in Rancher has historically been a pretty static integration. There were a fixed list of aggregators to choose from (ElasticSearch, Splunk, Kafka, Fluentd and Syslog), and only two configuration points to choose (Cluster-level and Project-level). Logging in 2.5 has been completely overhauled to provide a more flexible experience for log aggregation. With the new logging feature, administrators and users alike can deploy logging that meets fine-grained collection criteria while offering a wider array of destinations and configuration options. diff --git a/content/rancher/v2.x/en/cis-scans/v2.4/skipped-tests/_index.md b/content/rancher/v2.x/en/cis-scans/v2.4/skipped-tests/_index.md index f35347c3eaa..d313533041e 100644 --- a/content/rancher/v2.x/en/cis-scans/v2.4/skipped-tests/_index.md +++ b/content/rancher/v2.x/en/cis-scans/v2.4/skipped-tests/_index.md @@ -23,7 +23,7 @@ All the tests that are skipped and not applicable on this page will be counted a | 1.2.16 | Ensure that the admission control plugin PodSecurityPolicy is set (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | | 1.2.33 | Ensure that the --encryption-provider-config argument is set as appropriate (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. | | 1.2.34 | Ensure that encryption providers are appropriately configured (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. | -| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Scored) | System level configurations are required prior to provisioning the cluster in order for this argument to be set to true. | +| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Scored) | System level configurations are required before provisioning the cluster in order for this argument to be set to true. | | 4.2.10 | Ensure that the--tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. | | 5.1.5 | Ensure that default service accounts are not actively used. (Scored) | Kubernetes provides default service accounts to be used. | | 5.2.2 | Minimize the admission of containers wishing to share the host process ID namespace (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | @@ -79,7 +79,7 @@ Number | Description | Reason for Skipping 1.7.3 | "Do not admit containers wishing to share the host IPC namespace (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail. 
1.7.4 | "Do not admit containers wishing to share the host network namespace (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail. 1.7.5 | " Do not admit containers with allowPrivilegeEscalation (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail. -2.1.6 | "Ensure that the --protect-kernel-defaults argument is set to true (Scored)" | System level configurations are required prior to provisioning the cluster in order for this argument to be set to true. +2.1.6 | "Ensure that the --protect-kernel-defaults argument is set to true (Scored)" | System level configurations are required before provisioning the cluster in order for this argument to be set to true. 2.1.10 | "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored)" | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. ### CIS Benchmark v1.4 Not Applicable Tests diff --git a/content/rancher/v2.x/en/cis-scans/v2.5/skipped-tests/_index.md b/content/rancher/v2.x/en/cis-scans/v2.5/skipped-tests/_index.md index 6c79a7627a9..2fb1461e9c6 100644 --- a/content/rancher/v2.x/en/cis-scans/v2.5/skipped-tests/_index.md +++ b/content/rancher/v2.x/en/cis-scans/v2.5/skipped-tests/_index.md @@ -20,7 +20,7 @@ This section lists the tests that are skipped in the permissive test profile for | 1.2.16 | Ensure that the admission control plugin PodSecurityPolicy is set (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | | 1.2.33 | Ensure that the --encryption-provider-config argument is set as appropriate (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. | | 1.2.34 | Ensure that encryption providers are appropriately configured (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. | -| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Scored) | System level configurations are required prior to provisioning the cluster in order for this argument to be set to true. | +| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Scored) | System level configurations are required before provisioning the cluster in order for this argument to be set to true. | | 4.2.10 | Ensure that the--tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. | | 5.1.5 | Ensure that default service accounts are not actively used. (Scored) | Kubernetes provides default service accounts to be used. | | 5.2.2 | Minimize the admission of containers wishing to share the host process ID namespace (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. 
| diff --git a/content/rancher/v2.x/en/cluster-admin/backing-up-etcd/_index.md b/content/rancher/v2.x/en/cluster-admin/backing-up-etcd/_index.md index e73b9b11d8f..df3b68815e9 100644 --- a/content/rancher/v2.x/en/cluster-admin/backing-up-etcd/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/backing-up-etcd/_index.md @@ -42,7 +42,7 @@ Because the Kubernetes version is now included in the snapshot, it is possible t The multiple components of the snapshot allow you to select from the following options if you need to restore a cluster from a snapshot: -- **Restore just the etcd contents:** This restore is similar to restoring to snapshots in Rancher prior to v2.4.0. +- **Restore just the etcd contents:** This restore is similar to restoring to snapshots in Rancher before v2.4.0. - **Restore etcd and Kubernetes version:** This option should be used if a Kubernetes upgrade is the reason that your cluster is failing, and you haven't made any cluster configuration changes. - **Restore etcd, Kubernetes versions and cluster configuration:** This option should be used if you changed both the Kubernetes version and cluster configuration when upgrading. @@ -85,7 +85,7 @@ On restore, the following process is used: 5. The cluster is restored and post-restore actions will be done in the cluster. {{% /tab %}} -{{% tab "Rancher prior to v2.4.0" %}} +{{% tab "Rancher before v2.4.0" %}} When Rancher creates a snapshot, only the etcd data is included in the snapshot. Because the Kubernetes version is not included in the snapshot, there is no option to restore a cluster to a different Kubernetes version. @@ -217,4 +217,4 @@ This option is not available directly in the UI, and is only available through t # Enabling Snapshot Features for Clusters Created Before Rancher v2.2.0 -If you have any Rancher launched Kubernetes clusters that were created prior to v2.2.0, after upgrading Rancher, you must [edit the cluster]({{}}/rancher/v2.x/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the updated snapshot features. Even if you were already creating snapshots prior to v2.2.0, you must do this step as the older snapshots will not be available to use to [back up and restore etcd through the UI]({{}}/rancher/v2.x/en/cluster-admin/restoring-etcd/). +If you have any Rancher launched Kubernetes clusters that were created before v2.2.0, after upgrading Rancher, you must [edit the cluster]({{}}/rancher/v2.x/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the updated snapshot features. Even if you were already creating snapshots before v2.2.0, you must do this step as the older snapshots will not be available to use to [back up and restore etcd through the UI]({{}}/rancher/v2.x/en/cluster-admin/restoring-etcd/). diff --git a/content/rancher/v2.x/en/cluster-admin/nodes/_index.md b/content/rancher/v2.x/en/cluster-admin/nodes/_index.md index 715cf6f9951..668fa0e58a0 100644 --- a/content/rancher/v2.x/en/cluster-admin/nodes/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/nodes/_index.md @@ -144,7 +144,7 @@ There are two drain modes: aggressive and safe. If a node has standalone pods or ephemeral data it will be cordoned but not drained. {{% /tab %}} -{{% tab "Rancher prior to v2.2.x" %}} +{{% tab "Rancher before v2.2.x" %}} The following list describes each drain option: @@ -170,7 +170,7 @@ The timeout given to each pod for cleaning things up, so they will have chance t The amount of time drain should continue to wait before giving up. 
->**Kubernetes Known Issue:** The [timeout setting](https://github.com/kubernetes/kubernetes/pull/64378) was not enforced while draining a node prior to Kubernetes 1.12. +>**Kubernetes Known Issue:** The [timeout setting](https://github.com/kubernetes/kubernetes/pull/64378) was not enforced while draining a node before Kubernetes 1.12. ### Drained and Cordoned State diff --git a/content/rancher/v2.x/en/cluster-admin/restoring-etcd/_index.md b/content/rancher/v2.x/en/cluster-admin/restoring-etcd/_index.md index 0396053a967..2215dc4b2dc 100644 --- a/content/rancher/v2.x/en/cluster-admin/restoring-etcd/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/restoring-etcd/_index.md @@ -37,7 +37,7 @@ Restores changed in Rancher v2.4.0. Snapshots are composed of the cluster data in etcd, the Kubernetes version, and the cluster configuration in the `cluster.yml.` These components allow you to select from the following options when restoring a cluster from a snapshot: -- **Restore just the etcd contents:** This restore is similar to restoring to snapshots in Rancher prior to v2.4.0. +- **Restore just the etcd contents:** This restore is similar to restoring to snapshots in Rancher before v2.4.0. - **Restore etcd and Kubernetes version:** This option should be used if a Kubernetes upgrade is the reason that your cluster is failing, and you haven't made any cluster configuration changes. - **Restore etcd, Kubernetes versions and cluster configuration:** This option should be used if you changed both the Kubernetes version and cluster configuration when upgrading. @@ -58,7 +58,7 @@ When rolling back to a prior Kubernetes version, the [upgrade strategy options]( **Result:** The cluster will go into `updating` state and the process of restoring the `etcd` nodes from the snapshot will start. The cluster is restored when it returns to an `active` state. {{% /tab %}} -{{% tab "Rancher prior to v2.4.0" %}} +{{% tab "Rancher before v2.4.0" %}} > **Prerequisites:** > @@ -110,4 +110,4 @@ If the group of etcd nodes loses quorum, the Kubernetes cluster will report a fa # Enabling Snapshot Features for Clusters Created Before Rancher v2.2.0 -If you have any Rancher launched Kubernetes clusters that were created prior to v2.2.0, after upgrading Rancher, you must [edit the cluster]({{}}/rancher/v2.x/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the updated snapshot features. Even if you were already creating snapshots prior to v2.2.0, you must do this step as the older snapshots will not be available to use to [back up and restore etcd through the UI]({{}}/rancher/v2.x/en/cluster-admin/restoring-etcd/). +If you have any Rancher launched Kubernetes clusters that were created before v2.2.0, after upgrading Rancher, you must [edit the cluster]({{}}/rancher/v2.x/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the updated snapshot features. Even if you were already creating snapshots before v2.2.0, you must do this step as the older snapshots will not be available to use to [back up and restore etcd through the UI]({{}}/rancher/v2.x/en/cluster-admin/restoring-etcd/). 
diff --git a/content/rancher/v2.x/en/cluster-admin/upgrading-kubernetes/_index.md b/content/rancher/v2.x/en/cluster-admin/upgrading-kubernetes/_index.md index 991ea4c7b50..c49e3df53ff 100644 --- a/content/rancher/v2.x/en/cluster-admin/upgrading-kubernetes/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/upgrading-kubernetes/_index.md @@ -54,7 +54,7 @@ When upgrading the Kubernetes version of a cluster, we recommend that you: The restore operation will work on a cluster that is not in a healthy or active state. {{% /tab %}} -{{% tab "Rancher prior to v2.4" %}} +{{% tab "Rancher before v2.4" %}} When upgrading the Kubernetes version of a cluster, we recommend that you: 1. Take a snapshot. diff --git a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/attaching-existing-storage/_index.md b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/attaching-existing-storage/_index.md index eb6fe652330..58670b543bc 100644 --- a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/attaching-existing-storage/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/attaching-existing-storage/_index.md @@ -57,7 +57,7 @@ These steps describe how to set up a PVC in the namespace where your stateful wo 1. Go to the project containing a workload that you want to add a persistent volume claim to. -1. Then click the **Volumes** tab and click **Add Volume**. (In versions prior to v2.3.0, click **Workloads** on the main navigation bar, then **Volumes.**) +1. Then click the **Volumes** tab and click **Add Volume**. (In versions before v2.3.0, click **Workloads** on the main navigation bar, then **Volumes.**) 1. Enter a **Name** for the volume claim. diff --git a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/how-storage-works/_index.md b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/how-storage-works/_index.md index a2565bd2b5b..fcb87bc1029 100644 --- a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/how-storage-works/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/how-storage-works/_index.md @@ -34,7 +34,7 @@ Persistent volume claims (PVCs) are objects that request storage resources from To access persistent storage, a pod must have a PVC mounted as a volume. This PVC lets your deployment application store its data in an external location, so that if a pod fails, it can be replaced with a new pod and continue accessing its data stored externally, as though an outage never occurred. -Each Rancher project contains a list of PVCs that you've created, available from **Resources > Workloads > Volumes.** (In versions prior to v2.3.0, the PVCs are in the **Volumes** tab.) You can reuse these PVCs when creating deployments in the future. +Each Rancher project contains a list of PVCs that you've created, available from **Resources > Workloads > Volumes.** (In versions before v2.3.0, the PVCs are in the **Volumes** tab.) You can reuse these PVCs when creating deployments in the future. 
### PVCs are Required for Both New and Existing Persistent Storage diff --git a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md index 02fb8e6672f..cb76fdcb2e8 100644 --- a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md @@ -66,7 +66,7 @@ These steps describe how to set up a PVC in the namespace where your stateful wo 1. Go to the **Cluster Manager** to the project containing a workload that you want to add a PVC to. -1. From the main navigation bar, choose **Resources > Workloads.** (In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.) Then select the **Volumes** tab. Click **Add Volume**. +1. From the main navigation bar, choose **Resources > Workloads.** (In versions before v2.3.0, choose **Workloads** on the main navigation bar.) Then select the **Volumes** tab. Click **Add Volume**. 1. Enter a **Name** for the volume claim. diff --git a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md index e5069253cd4..cb7c57d6109 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md @@ -218,7 +218,7 @@ Amazon will use the [EKS-optimized AMI](https://docs.aws.amazon.com/eks/latest/u | Minimum ASG Size | The minimum number of instances. This setting won't take effect until the [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) is installed. | {{% /tab %}} -{{% tab "Rancher prior to v2.5" %}} +{{% tab "Rancher before v2.5" %}} ### Account Access @@ -360,7 +360,7 @@ Service Role | The service role provides Kubernetes the permissions it requires VPC | Provides isolated network resources utilised by EKS and worker nodes. Rancher can create the VPC resources with the following [VPC Permissions]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/#vpc-permissions). -Resource targeting uses `*` as the ARN of many of the resources created cannot be known prior to creating the EKS cluster in Rancher. +Resource targeting uses `*` as the ARN of many of the resources created cannot be known before creating the EKS cluster in Rancher. ```json { diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/_index.md index c9dd3762981..2ecc8a4e6a4 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/_index.md @@ -12,7 +12,7 @@ Follow these steps while creating the vSphere cluster in Rancher: {{< img "/img/rancher/vsphere-node-driver-cloudprovider.png" "vsphere-node-driver-cloudprovider">}} 1. Click on **Edit as YAML** -1. Insert the following structure to the pre-populated cluster YAML. As of Rancher v2.3+, this structure must be placed under `rancher_kubernetes_engine_config`. In versions prior to v2.3, it has to be defined as a top-level field. Note that the `name` *must* be set to `vsphere`. +1. 
Insert the following structure to the pre-populated cluster YAML. As of Rancher v2.3+, this structure must be placed under `rancher_kubernetes_engine_config`. In versions before v2.3, it has to be defined as a top-level field. Note that the `name` *must* be set to `vsphere`. ```yaml rancher_kubernetes_engine_config: # Required as of Rancher v2.3+ diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/azure/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/azure/_index.md index a71e7e3a906..251393573da 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/azure/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/azure/_index.md @@ -88,7 +88,7 @@ You can access your cluster after its state is updated to **Active.** - `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces {{% /tab %}} -{{% tab "Rancher prior to v2.2.0" %}} +{{% tab "Rancher before v2.2.0" %}} Use Rancher to create a Kubernetes cluster in Azure. diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/azure/azure-node-template-config/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/azure/azure-node-template-config/_index.md index a9fd0d1fb09..1c2db8c79cf 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/azure/azure-node-template-config/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/azure/azure-node-template-config/_index.md @@ -22,7 +22,7 @@ The [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-d - **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/) {{% /tab %}} -{{% tab "Rancher prior to v2.2.0" %}} +{{% tab "Rancher before v2.2.0" %}} - **Account Access** stores your account information for authenticating with Azure. - **Placement** sets the geographical region where your cluster is hosted and other location metadata. diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/_index.md index 3a26d0f6911..76aacc91d64 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/_index.md @@ -58,7 +58,7 @@ You can access your cluster after its state is updated to **Active.** - `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces {{% /tab %}} -{{% tab "Rancher prior to v2.2.0" %}} +{{% tab "Rancher before v2.2.0" %}} 1. From the **Clusters** page, click **Add Cluster**. 1. Choose **DigitalOcean**. 
diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/do-node-template-config/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/do-node-template-config/_index.md index 9e2ad91e795..4d9a0066f42 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/do-node-template-config/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/do-node-template-config/_index.md @@ -21,7 +21,7 @@ The [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-d - **Registry mirrors:** Docker Registry mirror to be used by the Docker daemon - **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/) {{% /tab %}} -{{% tab "Rancher prior to v2.2.0" %}} +{{% tab "Rancher before v2.2.0" %}} ### Access Token diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/_index.md index 4dc2868dfe5..f6efca2bc6a 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/_index.md @@ -76,7 +76,7 @@ You can access your cluster after its state is updated to **Active.** - `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces {{% /tab %}} -{{% tab "Rancher prior to v2.2.0" %}} +{{% tab "Rancher before v2.2.0" %}} 1. From the **Clusters** page, click **Add Cluster**. 1. Choose **Amazon EC2**. diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2-node-template-config/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2-node-template-config/_index.md index e7c1859c4f3..9b8089cbf1d 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2-node-template-config/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2-node-template-config/_index.md @@ -49,7 +49,7 @@ If you need to pass an **IAM Instance Profile Name** (not ARN), for example, whe In the **Engine Options** section of the node template, you can configure the Docker daemon. You may want to specify the docker version or a Docker registry mirror. {{% /tab %}} -{{% tab "Rancher prior to v2.2.0" %}} +{{% tab "Rancher before v2.2.0" %}} ### Account Access diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/_index.md index e027990cd8f..6db13f971c2 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/_index.md @@ -43,7 +43,7 @@ For the fields to be populated, your setup needs to fulfill the [prerequisites.] In Rancher v2.3.3+, you can provision VMs with any operating system that supports `cloud-init`. Only YAML format is supported for the [cloud config.](https://cloudinit.readthedocs.io/en/latest/topics/examples.html) -In Rancher prior to v2.3.3, the vSphere node driver included in Rancher only supported the provisioning of VMs with [RancherOS]({{}}/os/v1.x/en/) as the guest operating system. 
+In Rancher before v2.3.3, the vSphere node driver included in Rancher only supported the provisioning of VMs with [RancherOS]({{}}/os/v1.x/en/) as the guest operating system. ### Video Walkthrough of v2.3.3 Node Template Features diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md index c56461fc35f..079ef5e5e15 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md @@ -33,7 +33,7 @@ Refer to this [how-to guide]({{}}/rancher/v2.x/en/cluster-provisioning/ It must be ensured that the hosts running the Rancher server are able to establish the following network connections: - To the vSphere API on the vCenter server (usually port 443/TCP). -- To the Host API (port 443/TCP) on all ESXi hosts used to instantiate virtual machines for the clusters (*only required with Rancher prior to v2.3.3 or when using the ISO creation method in later versions*). +- To the Host API (port 443/TCP) on all ESXi hosts used to instantiate virtual machines for the clusters (*only required with Rancher before v2.3.3 or when using the ISO creation method in later versions*). - To port 22/TCP and 2376/TCP on the created VMs See [Node Networking Requirements]({{}}/rancher/v2.x/en/cluster-provisioning/node-requirements/#networking-requirements) for a detailed list of port requirements applicable for creating nodes on an infrastructure provider. @@ -102,11 +102,11 @@ You can access your cluster after its state is updated to **Active.** - `Default`, containing the `default` namespace - `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces {{% /tab %}} -{{% tab "Rancher prior to v2.2.0" %}} +{{% tab "Rancher before v2.2.0" %}} Use Rancher to create a Kubernetes cluster in vSphere. -For Rancher versions prior to v2.0.4, when you create the cluster, you will also need to follow the steps in [this section](http://localhost:9001/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vpshere-node-template-config/prior-to-2.0.4/#disk-uuids) to enable disk UUIDs. +For Rancher versions before v2.0.4, when you create the cluster, you will also need to follow the steps in [this section](http://localhost:9001/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vpshere-node-template-config/before-2.0.4/#disk-uuids) to enable disk UUIDs. 1. From the **Clusters** page, click **Add Cluster**. 1. Choose **vSphere**. @@ -116,7 +116,7 @@ For Rancher versions prior to v2.0.4, when you create the cluster, you will also 1. If you want to dynamically provision persistent storage or other infrastructure later, you will need to enable the vSphere cloud provider by modifying the cluster YAML file. For details, refer to [this section.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere) 1. Add one or more [node pools]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-pools) to your cluster. Each node pool uses a node template to provision new nodes. To create a node template, click **Add Node Template** and complete the **vSphere Options** form. For help filling out the form, refer to the vSphere node template configuration reference. 
Refer to the newest version of the configuration reference that is less than or equal to your Rancher version: - [v2.0.4]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.0.4) - - [prior to v2.0.4]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4) + - [before v2.0.4]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/before-2.0.4) 1. Review your options to confirm they're correct. Then click **Create** to start provisioning the VMs and Kubernetes services. **Result:** diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/_index.md index 71f5b3d573d..b3d79629b4d 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/_index.md @@ -13,4 +13,4 @@ The vSphere node templates in Rancher were updated in the following Rancher vers - [v2.2.0](./v2.2.0) - [v2.0.4](./v2.0.4) -For Rancher versions prior to v2.0.4, refer to [this version.](./prior-to-2.0.4) \ No newline at end of file +For Rancher versions before v2.0.4, refer to [this version.](./before-2.0.4) \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4/_index.md index b1683f18d2e..9801050ad1d 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4/_index.md @@ -1,6 +1,6 @@ --- -title: vSphere Node Template Configuration in Rancher prior to v2.0.4 -shortTitle: Prior to v2.0.4 +title: vSphere Node Template Configuration in Rancher before v2.0.4 +shortTitle: Before v2.0.4 weight: 5 --- diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/_index.md index 79f17ca15fe..b5bc0a0a5d6 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/_index.md @@ -267,7 +267,7 @@ windows_prefered_cluster: false An example cluster config file is included below. 
-{{% accordion id="prior-to-v2.3.0-cluster-config-file" label="Example Cluster Config File for Rancher v2.0.0-v2.2.x" %}} +{{% accordion id="before-v2.3.0-cluster-config-file" label="Example Cluster Config File for Rancher v2.0.0-v2.2.x" %}} ```yaml addon_job_timeout: 30 authentication: diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/docs-for-2.1-and-2.2/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/docs-for-2.1-and-2.2/_index.md index 3be1b324b65..62c3da2ca01 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/docs-for-2.1-and-2.2/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/docs-for-2.1-and-2.2/_index.md @@ -11,7 +11,7 @@ When you create a [custom cluster]({{}}/rancher/v2.x/en/cluster-provisi You can provision a custom Windows cluster using Rancher by using a mix of Linux and Windows hosts as your cluster nodes. ->**Important:** In versions of Rancher prior to v2.3, support for Windows nodes is experimental. Therefore, it is not recommended to use Windows nodes for production environments if you are using Rancher prior to v2.3. +>**Important:** In versions of Rancher before v2.3, support for Windows nodes is experimental. Therefore, it is not recommended to use Windows nodes for production environments if you are using Rancher before v2.3. This guide walks you through create of a custom cluster that includes three nodes: diff --git a/content/rancher/v2.x/en/deploy-across-clusters/_index.md b/content/rancher/v2.x/en/deploy-across-clusters/_index.md index da1d2ef7317..3f8c114c411 100644 --- a/content/rancher/v2.x/en/deploy-across-clusters/_index.md +++ b/content/rancher/v2.x/en/deploy-across-clusters/_index.md @@ -15,4 +15,4 @@ Fleet is GitOps at scale. For more information, refer to the [Fleet section.](./ ### Multi-cluster Apps -In Rancher prior to v2.5, the multi-cluster apps feature was used to deploy applications across clusters. Refer to the documentation [here.](./multi-cluster-apps) \ No newline at end of file +In Rancher before v2.5, the multi-cluster apps feature was used to deploy applications across clusters. Refer to the documentation [here.](./multi-cluster-apps) \ No newline at end of file diff --git a/content/rancher/v2.x/en/helm-charts/_index.md b/content/rancher/v2.x/en/helm-charts/_index.md index b2c92e8c114..d517e9e4fa6 100644 --- a/content/rancher/v2.x/en/helm-charts/_index.md +++ b/content/rancher/v2.x/en/helm-charts/_index.md @@ -9,4 +9,4 @@ In Rancher v2.5, the [apps and marketplace feature](./apps-marketplace) is used ### Catalogs -In Rancher prior to v2.5, the [catalog system](./legacy-catalogs) was used to manage Helm charts. \ No newline at end of file +In Rancher before v2.5, the [catalog system](./legacy-catalogs) was used to manage Helm charts. \ No newline at end of file diff --git a/content/rancher/v2.x/en/helm-charts/legacy-catalogs/_index.md b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/_index.md index e174ce6eea4..65f83844f4f 100644 --- a/content/rancher/v2.x/en/helm-charts/legacy-catalogs/_index.md +++ b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/_index.md @@ -52,7 +52,7 @@ When you create a custom catalog, you will have to configure the catalog to use When you launch a new app from a catalog, the app will be managed by the catalog's Helm version. A Helm 2 catalog will use Helm 2 to manage all of the apps, and a Helm 3 catalog will use Helm 3 to manage all apps. 
-By default, catalogs are assumed to be deployed using Helm 2. If you run an app in Rancher prior to v2.4.0, then upgrade to Rancher v2.4.0+, the app will still be managed by Helm 2. If the app was already using a Helm 3 Chart (API version 2) it will no longer work in v2.4.0+. You must either downgrade the chart's API version or recreate the catalog to use Helm 3. +By default, catalogs are assumed to be deployed using Helm 2. If you run an app in Rancher before v2.4.0, then upgrade to Rancher v2.4.0+, the app will still be managed by Helm 2. If the app was already using a Helm 3 Chart (API version 2) it will no longer work in v2.4.0+. You must either downgrade the chart's API version or recreate the catalog to use Helm 3. Charts that are specific to Helm 2 should only be added to a Helm 2 catalog, and Helm 3 specific charts should only be added to a Helm 3 catalog. diff --git a/content/rancher/v2.x/en/helm-charts/legacy-catalogs/adding-catalogs/_index.md b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/adding-catalogs/_index.md index 9f96fa1ac3c..746b0f2e178 100644 --- a/content/rancher/v2.x/en/helm-charts/legacy-catalogs/adding-catalogs/_index.md +++ b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/adding-catalogs/_index.md @@ -43,7 +43,7 @@ Private catalog repositories can be added using credentials like Username and Pa For more information on private Git/Helm catalogs, refer to the [custom catalog configuration reference.]({{}}/rancher/v2.x/en/catalog/catalog-config) - 1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions prior to v2.2.0, you can select **Catalogs** directly in the navigation bar. + 1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions before v2.2.0, you can select **Catalogs** directly in the navigation bar. 2. Click **Add Catalog**. 3. Complete the form and click **Create**. @@ -56,7 +56,7 @@ For more information on private Git/Helm catalogs, refer to the [custom catalog >- [Administrator Global Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) >- [Custom Global Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Catalogs]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) role assigned. - 1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions prior to v2.2.0, you can select **Catalogs** directly in the navigation bar. + 1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions before v2.2.0, you can select **Catalogs** directly in the navigation bar. 2. Click **Add Catalog**. 3. Complete the form. Select the Helm version that will be used to launch all of the apps in the catalog. For more information about the Helm version, refer to [this section.]( {{}}/rancher/v2.x/en/helm-charts/legacy-catalogs/#catalog-helm-deployment-versions) diff --git a/content/rancher/v2.x/en/helm-charts/legacy-catalogs/built-in/_index.md b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/built-in/_index.md index 8f31a3c958c..43b8c332feb 100644 --- a/content/rancher/v2.x/en/helm-charts/legacy-catalogs/built-in/_index.md +++ b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/built-in/_index.md @@ -15,7 +15,7 @@ Within Rancher, there are default catalogs packaged as part of Rancher. 
These ca
 >- [Administrator Global Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/)
 >- [Custom Global Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Catalogs]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions-reference) role assigned.

-1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions prior to v2.2.0, you can select **Catalogs** directly in the navigation bar.
+1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions before v2.2.0, you can select **Catalogs** directly in the navigation bar.

 2. Toggle the default catalogs that you want to be enabled or disabled:

@@ -23,4 +23,4 @@ Within Rancher, there are default catalogs packaged as part of Rancher. These ca
 - **Helm Stable:** This catalog, which is maintained by the Kubernetes community, includes native [Helm charts](https://helm.sh/docs/chart_template_guide/). This catalog features the largest pool of apps.
 - **Helm Incubator:** Similar in user experience to Helm Stable, but this catalog is filled with applications in **beta**.

-  **Result**: The chosen catalogs are enabled. Wait a few minutes for Rancher to replicate the catalog charts. When replication completes, you'll be able to see them in any of your projects by selecting **Apps** from the main navigation bar. In versions prior to v2.2.0, within a project, you can select **Catalog Apps** from the main navigation bar.
+  **Result**: The chosen catalogs are enabled. Wait a few minutes for Rancher to replicate the catalog charts. When replication completes, you'll be able to see them in any of your projects by selecting **Apps** from the main navigation bar. In versions before v2.2.0, within a project, you can select **Catalog Apps** from the main navigation bar.
diff --git a/content/rancher/v2.x/en/helm-charts/legacy-catalogs/globaldns/_index.md b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/globaldns/_index.md
index fcd9b8e366b..798d3e955fa 100644
--- a/content/rancher/v2.x/en/helm-charts/legacy-catalogs/globaldns/_index.md
+++ b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/globaldns/_index.md
@@ -26,7 +26,7 @@ Rancher's Global DNS feature provides a way to program an external DNS provider

 # Global DNS Providers

-Prior to adding in Global DNS entries, you will need to configure access to an external provider.
+Before adding Global DNS entries, you will need to configure access to an external provider.

 The following table lists the first version of Rancher in which each provider debuted.
diff --git a/content/rancher/v2.x/en/helm-charts/legacy-catalogs/launching-apps/_index.md b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/launching-apps/_index.md
index ff64af67669..932c04cd648 100644
--- a/content/rancher/v2.x/en/helm-charts/legacy-catalogs/launching-apps/_index.md
+++ b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/launching-apps/_index.md
@@ -28,7 +28,7 @@ Before launching an app, you'll need to either [enable a built-in global catalog

 1. From the **Global** view, open the project that you want to deploy an app to.

-2. From the main navigation bar, choose **Apps**. In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**.
+2. From the main navigation bar, choose **Apps**. In versions before v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**.

3.
Find the app that you want to launch, and then click **View Now**. @@ -47,7 +47,7 @@ Before launching an app, you'll need to either [enable a built-in global catalog 7. Review the files in **Preview**. When you're satisfied, click **Launch**. -**Result**: Your application is deployed to your chosen namespace. You can view the application status from the project's **Workloads** view or **Apps** view. In versions prior to v2.2.0, this is the **Catalog Apps** view. +**Result**: Your application is deployed to your chosen namespace. You can view the application status from the project's **Workloads** view or **Apps** view. In versions before v2.2.0, this is the **Catalog Apps** view. # Configuration Options diff --git a/content/rancher/v2.x/en/helm-charts/legacy-catalogs/managing-apps/_index.md b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/managing-apps/_index.md index 465f3cd95ce..5873a303d2c 100644 --- a/content/rancher/v2.x/en/helm-charts/legacy-catalogs/managing-apps/_index.md +++ b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/managing-apps/_index.md @@ -22,7 +22,7 @@ After an application is deployed, you can easily upgrade to a different template 1. From the **Global** view, navigate to the project that contains the catalog application that you want to upgrade. -1. From the main navigation bar, choose **Apps**. In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**. +1. From the main navigation bar, choose **Apps**. In versions before v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**. 3. Find the application that you want to upgrade, and then click the ⋮ to find **Upgrade**. @@ -39,7 +39,7 @@ After an application is deployed, you can easily upgrade to a different template **Result**: Your application is updated. You can view the application status from the project's: - **Workloads** view -- **Apps** view. In versions prior to v2.2.0, this is the **Catalog Apps** view. +- **Apps** view. In versions before v2.2.0, this is the **Catalog Apps** view. ### Rolling Back Catalog Applications @@ -48,7 +48,7 @@ After an application has been upgraded, you can easily rollback to a different t 1. From the **Global** view, navigate to the project that contains the catalog application that you want to upgrade. -1. From the main navigation bar, choose **Apps**. In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**. +1. From the main navigation bar, choose **Apps**. In versions before v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**. 3. Find the application that you want to rollback, and then click the ⋮ to find **Rollback**. @@ -63,7 +63,7 @@ After an application has been upgraded, you can easily rollback to a different t **Result**: Your application is updated. You can view the application status from the project's: - **Workloads** view -- **Apps** view. In versions prior to v2.2.0, this is the **Catalog Apps** view. +- **Apps** view. In versions before v2.2.0, this is the **Catalog Apps** view. ### Deleting Catalog Application Deployments diff --git a/content/rancher/v2.x/en/installation/_index.md b/content/rancher/v2.x/en/installation/_index.md index cb43a052f17..a37dd3b4ed5 100644 --- a/content/rancher/v2.x/en/installation/_index.md +++ b/content/rancher/v2.x/en/installation/_index.md @@ -78,7 +78,7 @@ For that reason, we recommend that for a production-grade architecture, you shou > > For Rancher v2.5, any Kubernetes cluster can be used. 
 > For Rancher v2.4.x, either an RKE Kubernetes cluster or K3s Kubernetes cluster can be used.
-> For Rancher prior to v2.4, an RKE cluster must be used.
+> For Rancher before v2.4, an RKE cluster must be used.

 For testing or demonstration purposes, you can install Rancher in a single Docker container. In this Docker install, you can use Rancher to set up Kubernetes clusters out-of-the-box. The Docker install allows you to explore the Rancher server functionality, but it is intended to be used for development and testing purposes only.
diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md
index bbc59c11ec8..9d3176cfee2 100644
--- a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md
+++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md
@@ -19,7 +19,7 @@ The cluster requirements depend on the Rancher version:

 - **As of Rancher v2.5,** Rancher can be installed on any Kubernetes cluster. This cluster can use upstream Kubernetes, or it can use one of Rancher's Kubernetes distributions, or it can be a managed Kubernetes cluster from a provider such as Amazon EKS.
   > **Note:** To deploy Rancher v2.5 on a hosted Kubernetes cluster such as EKS, GKE, or AKS, you should deploy a compatible Ingress controller first to configure [SSL termination on Rancher.]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/#4-choose-your-ssl-configuration).
 - **In Rancher v2.4.x,** Rancher needs to be installed on a K3s Kubernetes cluster or an RKE Kubernetes cluster.
-- **In Rancher prior to v2.4,** Rancher needs to be installed on an RKE Kubernetes cluster.
+- **In Rancher before v2.4,** Rancher needs to be installed on an RKE Kubernetes cluster.

 For the tutorial to install an RKE Kubernetes cluster, refer to [this page.]({{}}/rancher/v2.x/en/installation/resources/k8s-tutorials/ha-rke/) For help setting up the infrastructure for a high-availability RKE cluster, refer to [this page.]({{}}/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/infra-for-ha)
diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/rollbacks/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/rollbacks/_index.md
index 9758dd07086..fbe6fdb2966 100644
--- a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/rollbacks/_index.md
+++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/rollbacks/_index.md
@@ -80,7 +80,7 @@ Rancher can be rolled back using the Rancher UI.

 # Rolling Back to Rancher v2.2-v2.4+

-To roll back to Rancher prior to v2.5, follow the procedure detailed here: [Restoring Backups — Kubernetes installs]({{}}/rancher/v2.x/en/backups/v2.0.x-v2.4.x/restore/rke-restore/) Restoring a snapshot of the Rancher server cluster will revert Rancher to the version and state at the time of the snapshot.
+To roll back to Rancher before v2.5, follow the procedure detailed here: [Restoring Backups — Kubernetes installs.]({{}}/rancher/v2.x/en/backups/v2.0.x-v2.4.x/restore/rke-restore/) Restoring a snapshot of the Rancher server cluster will revert Rancher to the version and state at the time of the snapshot.
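+
+If the Rancher server cluster was built with RKE, the snapshot restore itself is driven by the RKE CLI. A minimal sketch, assuming a local `cluster.yml` for the Rancher server cluster and an existing snapshot named `rancher-rollback` (both names are illustrative placeholders, not defaults):
+
+```
+# Restore etcd to the chosen snapshot, then reconcile the cluster back to that state
+rke etcd snapshot-restore --config cluster.yml --name rancher-rollback
+```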
 For information on how to roll back Rancher installed with Docker, refer to [this page.]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-rollbacks)
diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/namespace-migration/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/namespace-migration/_index.md
index 87b67939459..99a2a805323 100644
--- a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/namespace-migration/_index.md
+++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/namespace-migration/_index.md
@@ -35,7 +35,7 @@ During upgrades from Rancher v2.0.6- to Rancher v2.0.7+, all system namespaces a

 You can prevent cluster networking issues from occurring during your upgrade to v2.0.7+ by unassigning system namespaces from all of your Rancher projects. Complete this task if you've assigned any of a cluster's system namespaces into a Rancher project.

-1. Log into the Rancher UI prior to upgrade.
+1. Log into the Rancher UI before upgrading.

 1. From the context menu, open the **local** cluster (or any of your other clusters).
diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/_index.md
index 0eab7f2d18c..b3cefbf0619 100644
--- a/content/rancher/v2.x/en/installation/other-installation-methods/_index.md
+++ b/content/rancher/v2.x/en/installation/other-installation-methods/_index.md
@@ -19,6 +19,6 @@ Since there is only one node and a single Docker container, if the node goes dow

 The ability to migrate Rancher to a high-availability cluster depends on the Rancher version:

-- For Rancher v2.0-v2.4, there was no migration path from a Docker installation to a high-availability installation. Therefore, if you are using Rancher prior to v2.5, you may want to use a Kubernetes installation from the start.
+- For Rancher v2.0-v2.4, there was no migration path from a Docker installation to a high-availability installation. Therefore, if you are using Rancher before v2.5, you may want to use a Kubernetes installation from the start.
 - For Rancher v2.5+, the Rancher backup operator can be used to migrate Rancher from the single Docker container install to an installation on a high-availability Kubernetes cluster. For details, refer to the documentation on [migrating Rancher to a new cluster.]({{}}/rancher/v2.x/en/backups/v2.5/migrating-rancher/)
\ No newline at end of file
diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/_index.md
index 97025002cff..259b63095e9 100644
--- a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/_index.md
+++ b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/_index.md
@@ -25,7 +25,7 @@ This section describes installing Rancher in five parts:

 - [2. Choose your SSL Configuration](#2-choose-your-ssl-configuration)
 - [3. Render the Rancher Helm Template](#3-render-the-rancher-helm-template)
 - [4. Install Rancher](#4-install-rancher)
-- [5. For Rancher versions prior to v2.3.0, Configure System Charts](#5-for-rancher-versions-prior-to-v2-3-0-configure-system-charts)
+- [5. For Rancher versions before v2.3.0, Configure System Charts](#5-for-rancher-versions-before-v2-3-0-configure-system-charts)

 # 1.
Add the Helm Chart Repository @@ -220,9 +220,9 @@ kubectl -n cattle-system apply -R -f ./rancher > **Note:** If you don't intend to send telemetry data, opt out [telemetry]({{}}/rancher/v2.x/en/faq/telemetry/) during the initial login. Leaving this active in an air-gapped environment can cause issues if the sockets cannot be opened successfully. -# 5. For Rancher versions prior to v2.3.0, Configure System Charts +# 5. For Rancher versions before v2.3.0, Configure System Charts -If you are installing Rancher versions prior to v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{}}/rancher/v2.x/en/installation/resources/local-system-charts/). +If you are installing Rancher versions before v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{}}/rancher/v2.x/en/installation/resources/local-system-charts/). # Additional Resources @@ -255,7 +255,7 @@ For security purposes, SSL (Secure Sockets Layer) is required when using Rancher > - Configure custom CA root certificate to access your services? See [Custom CA root certificate]({{}}/rancher/v2.x/en/installation/options/custom-ca-root-certificate/). > - Record all transactions with the Rancher API? See [API Auditing]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/advanced/#api-audit-log). -- For Rancher prior to v2.3.0, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher prior to v2.3.0.]({{}}/rancher/v2.x/en/installation/resources/local-system-charts/) +- For Rancher before v2.3.0, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher before v2.3.0.]({{}}/rancher/v2.x/en/installation/resources/local-system-charts/) Choose from the following options: @@ -364,7 +364,7 @@ If you are installing Rancher v2.3.0+, the installation is complete. > **Note:** If you don't intend to send telemetry data, opt out [telemetry]({{}}/rancher/v2.x/en/faq/telemetry/) during the initial login. -If you are installing Rancher versions prior to v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{}}/rancher/v2.x/en/installation/resources/local-system-charts/). +If you are installing Rancher versions before v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{}}/rancher/v2.x/en/installation/resources/local-system-charts/). 
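+
+Whichever version you are installing, it is worth confirming that the Rancher deployment actually rolled out before moving on. A quick sanity check, assuming the default `rancher` deployment in the `cattle-system` namespace:
+
+```
+# Wait until the Rancher pods report as available
+kubectl -n cattle-system rollout status deploy/rancher
+kubectl -n cattle-system get pods
+```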
{{% /tab %}} {{% /tabs %}} diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/_index.md index 1158e8ae38b..4fa8359ec10 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/_index.md @@ -9,7 +9,7 @@ aliases: This section describes how to install a Kubernetes cluster according to our [best practices for the Rancher server environment.]({{}}/rancher/v2.x/en/overview/architecture-recommendations/#environment-for-kubernetes-installations) This cluster should be dedicated to run only the Rancher server. -For Rancher prior to v2.4, Rancher should be installed on an [RKE]({{}}/rke/latest/en/) (Rancher Kubernetes Engine) Kubernetes cluster. RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers. +For Rancher before v2.4, Rancher should be installed on an [RKE]({{}}/rke/latest/en/) (Rancher Kubernetes Engine) Kubernetes cluster. RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers. In Rancher v2.4, the Rancher management server can be installed on either an RKE cluster or a K3s Kubernetes cluster. K3s is also a fully certified Kubernetes distribution released by Rancher, but is newer than RKE. We recommend installing Rancher on K3s because K3s is easier to use, and more lightweight, with a binary size of less than 100 MB. The Rancher management server can only be run on a Kubernetes cluster in an infrastructure provider where Kubernetes is installed using RKE or K3s. Use of Rancher on hosted Kubernetes providers, such as EKS, is not supported. Note: After Rancher is installed on an RKE cluster, there is no migration path to a K3s setup at this time. diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/_index.md index 285df0b85c1..9b07532963a 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/_index.md @@ -17,7 +17,7 @@ In this installation scenario, you'll install Docker on a single Linux host, and A Docker installation of Rancher is recommended only for development and testing purposes. The ability to migrate Rancher to a high-availability cluster depends on the Rancher version: -- For Rancher v2.0-v2.4, there was no migration path from a Docker installation to a high-availability installation. Therefore, if you are using Rancher prior to v2.5, you may want to use a Kubernetes installation from the start. +- For Rancher v2.0-v2.4, there was no migration path from a Docker installation to a high-availability installation. Therefore, if you are using Rancher before v2.5, you may want to use a Kubernetes installation from the start. - For Rancher v2.5+, the Rancher backup operator can be used to migrate Rancher from the single Docker container install to an installation on a high-availability Kubernetes cluster. 
 For details, refer to the documentation on [migrating Rancher to a new cluster.]({{}}/rancher/v2.x/en/backups/v2.5/migrating-rancher/)
diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-rollbacks/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-rollbacks/_index.md
index b4e425f0ee7..6d64d85e090 100644
--- a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-rollbacks/_index.md
+++ b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-rollbacks/_index.md
@@ -44,7 +44,7 @@ If you have issues upgrading Rancher, roll it back to its latest known healthy s

 1. Using a remote Terminal connection, log into the node running your Rancher Server.

-1. Pull the version of Rancher that you were running prior to upgrade. Replace the `` with that version.
+1. Pull the version of Rancher that you were running before the upgrade. Replace the `` with that version.

     For example, if you were running Rancher v2.0.5 before the upgrade, pull v2.0.5.
@@ -85,4 +85,4 @@ If you have issues upgrading Rancher, roll it back to its latest known healthy s

 1. Wait a few moments and then open Rancher in a web browser. Confirm that the rollback succeeded and that your data is restored.

-**Result:** Rancher is rolled back to its version and data state prior to upgrade.
+**Result:** Rancher is rolled back to its version and data state before the upgrade.
diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/_index.md
index ab8530d16bc..c92de6a5437 100644
--- a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/_index.md
+++ b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/_index.md
@@ -252,7 +252,7 @@ As of Rancher v2.5, privileged access is [required.]({{}}/rancher/v2.x/

 For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, like when you log in or interact with a cluster.

-> For Rancher versions from v2.2.0 to v2.2.x, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher prior to v2.3.0.]({{}}/rancher/v2.x/en/installation/resources/local-system-charts/)
+> For Rancher versions from v2.2.0 to v2.2.x, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository.
For details, refer to the documentation on [setting up the system charts for Rancher before v2.3.0.]({{}}/rancher/v2.x/en/installation/resources/local-system-charts/) When starting the new Rancher server container, choose from the following options: diff --git a/content/rancher/v2.x/en/installation/requirements/_index.md b/content/rancher/v2.x/en/installation/requirements/_index.md index 7768a463908..8dba6f9a87d 100644 --- a/content/rancher/v2.x/en/installation/requirements/_index.md +++ b/content/rancher/v2.x/en/installation/requirements/_index.md @@ -16,7 +16,7 @@ Make sure the node(s) for the Rancher server fulfill the following requirements: - [RKE and Hosted Kubernetes](#rke-and-hosted-kubernetes) - [K3s Kubernetes](#k3s-kubernetes) - [RancherD](#rancherd) - - [CPU and Memory for Rancher prior to v2.4.0](#cpu-and-memory-for-rancher-prior-to-v2-4-0) + - [CPU and Memory for Rancher before v2.4.0](#cpu-and-memory-for-rancher-before-v2-4-0) - [Disks](#disks) - [Networking Requirements](#networking-requirements) - [Node IP Addresses](#node-ip-addresses) @@ -87,7 +87,7 @@ These CPU and memory requirements apply to each host in the Kubernetes cluster w These requirements apply to RKE Kubernetes clusters, as well as to hosted Kubernetes clusters such as EKS. -Performance increased in Rancher v2.4.0. For the requirements of Rancher prior to v2.4.0, refer to [this section.](#cpu-and-memory-for-rancher-prior-to-v2-4-0) +Performance increased in Rancher v2.4.0. For the requirements of Rancher before v2.4.0, refer to [this section.](#cpu-and-memory-for-rancher-before-v2-4-0) | Deployment Size | Clusters | Nodes | vCPUs | RAM | | --------------- | ---------- | ------------ | -------| ------- | @@ -133,10 +133,10 @@ These CPU and memory requirements apply to a host with a [single-node]({{}}/rancher/v2.x/en/installation/options/local-system-charts/). +If you are installing Rancher versions before v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{}}/rancher/v2.x/en/installation/options/local-system-charts/). ### Additional Resources @@ -238,7 +238,7 @@ For security purposes, SSL (Secure Sockets Layer) is required when using Rancher > - Configure custom CA root certificate to access your services? See [Custom CA root certificate]({{}}/rancher/v2.x/en/installation/options/chart-options/#additional-trusted-cas). > - Record all transactions with the Rancher API? See [API Auditing]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/advanced/#api-audit-log). -- For Rancher prior to v2.3.0, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher prior to v2.3.0.]({{}}/rancher/v2.x/en/installation/options/local-system-charts/) +- For Rancher before v2.3.0, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. 
For details, refer to the documentation on [setting up the system charts for Rancher before v2.3.0.]({{}}/rancher/v2.x/en/installation/options/local-system-charts/) Choose from the following options: @@ -328,7 +328,7 @@ docker run -d --restart=unless-stopped \ If you are installing Rancher v2.3.0+, the installation is complete. -If you are installing Rancher versions prior to v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{}}/rancher/v2.x/en/installation/options/local-system-charts/). +If you are installing Rancher versions before v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{}}/rancher/v2.x/en/installation/options/local-system-charts/). {{% /tab %}} {{% /tabs %}} diff --git a/content/rancher/v2.x/en/installation/resources/advanced/api-audit-log/_index.md b/content/rancher/v2.x/en/installation/resources/advanced/api-audit-log/_index.md index a5a08192554..968ac51ea2e 100644 --- a/content/rancher/v2.x/en/installation/resources/advanced/api-audit-log/_index.md +++ b/content/rancher/v2.x/en/installation/resources/advanced/api-audit-log/_index.md @@ -64,7 +64,7 @@ kubectl -n cattle-system logs -f rancher-84d886bdbb-s4s69 rancher-audit-log #### Rancher Web GUI 1. From the context menu, select **Cluster: local > System**. -1. From the main navigation bar, choose **Resources > Workloads.** (In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.) Find the `cattle-system` namespace. Open the `rancher` workload by clicking its link. +1. From the main navigation bar, choose **Resources > Workloads.** (In versions before v2.3.0, choose **Workloads** on the main navigation bar.) Find the `cattle-system` namespace. Open the `rancher` workload by clicking its link. 1. Pick one of the `rancher` pods and select **⋮ > View Logs**. 1. From the **Logs** drop-down, select `rancher-audit-log`. diff --git a/content/rancher/v2.x/en/installation/resources/choosing-version/_index.md b/content/rancher/v2.x/en/installation/resources/choosing-version/_index.md index e05682c8ecd..0e7c1a26507 100644 --- a/content/rancher/v2.x/en/installation/resources/choosing-version/_index.md +++ b/content/rancher/v2.x/en/installation/resources/choosing-version/_index.md @@ -33,7 +33,7 @@ Rancher provides several different Helm chart repositories to choose from. We al
 Instructions on when to select these repos are available below in [Switching to a Different Helm Chart Repository](#switching-to-a-different-helm-chart-repository).

-> **Note:** The introduction of the `rancher-latest` and `rancher-stable` Helm Chart repositories was introduced after Rancher v2.1.0, so the `rancher-stable` repository contains some Rancher versions that were never marked as `rancher/rancher:stable`. The versions of Rancher that were tagged as `rancher/rancher:stable` prior to v2.1.0 are v2.0.4, v2.0.6, v2.0.8. Post v2.1.0, all charts in the `rancher-stable` repository will correspond with any Rancher version tagged as `stable`.
+> **Note:** The `rancher-latest` and `rancher-stable` Helm Chart repositories were introduced after Rancher v2.1.0, so the `rancher-stable` repository contains some Rancher versions that were never marked as `rancher/rancher:stable`. The versions of Rancher that were tagged as `rancher/rancher:stable` before v2.1.0 are v2.0.4, v2.0.6, and v2.0.8. Post v2.1.0, all charts in the `rancher-stable` repository will correspond with any Rancher version tagged as `stable`.

 ### Helm Chart Versions

diff --git a/content/rancher/v2.x/en/installation/resources/k8s-tutorials/_index.md b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/_index.md
index 895a2ba95ae..a7004eda44e 100644
--- a/content/rancher/v2.x/en/installation/resources/k8s-tutorials/_index.md
+++ b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/_index.md
@@ -5,7 +5,7 @@ weight: 4

 This section contains information on how to install a Kubernetes cluster that the Rancher server can be installed on.

-In Rancher prior to v2.4, the Rancher server needed to run on an RKE Kubernetes cluster.
+In Rancher before v2.4, the Rancher server needed to run on an RKE Kubernetes cluster.

 In Rancher v2.4.x, Rancher needs to run on either an RKE Kubernetes cluster or a K3s Kubernetes cluster.

diff --git a/content/rancher/v2.x/en/installation/resources/k8s-tutorials/ha-RKE/_index.md b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/ha-RKE/_index.md
index aaa9c964713..7ede4383f28 100644
--- a/content/rancher/v2.x/en/installation/resources/k8s-tutorials/ha-RKE/_index.md
+++ b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/ha-RKE/_index.md
@@ -9,7 +9,7 @@ aliases:

 This section describes how to install a Kubernetes cluster. This cluster should be dedicated to run only the Rancher server.

-For Rancher prior to v2.4, Rancher should be installed on an RKE Kubernetes cluster. RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers.
+For Rancher before v2.4, Rancher should be installed on an RKE Kubernetes cluster. RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers.

 As of Rancher v2.4, the Rancher management server can be installed on either an RKE cluster or a K3s Kubernetes cluster. K3s is also a fully certified Kubernetes distribution released by Rancher, but is newer than RKE. We recommend installing Rancher on K3s because K3s is easier to use, and more lightweight, with a binary size of less than 100 MB. Note: After Rancher is installed on an RKE cluster, there is no migration path to a K3s setup at this time.
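+
+Once the nodes are provisioned and described in a `cluster.yml`, bringing up the RKE cluster itself is a single command. A sketch, assuming the `rke` CLI is installed and `cluster.yml` is in the working directory:
+
+```
+# Provision the Kubernetes cluster defined in cluster.yml
+rke up --config ./cluster.yml
+```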
diff --git a/content/rancher/v2.x/en/installation/resources/local-system-charts/_index.md b/content/rancher/v2.x/en/installation/resources/local-system-charts/_index.md index eec77dee63b..f4f70dc7788 100644 --- a/content/rancher/v2.x/en/installation/resources/local-system-charts/_index.md +++ b/content/rancher/v2.x/en/installation/resources/local-system-charts/_index.md @@ -9,7 +9,7 @@ aliases: The [System Charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. -In an air gapped installation of Rancher, you will need to configure Rancher to use a local copy of the system charts. This section describes how to use local system charts using a CLI flag in Rancher v2.3.0, and using a Git mirror for Rancher versions prior to v2.3.0. +In an air gapped installation of Rancher, you will need to configure Rancher to use a local copy of the system charts. This section describes how to use local system charts using a CLI flag in Rancher v2.3.0, and using a Git mirror for Rancher versions before v2.3.0. # Using Local System Charts in Rancher v2.3.0 @@ -17,7 +17,7 @@ In Rancher v2.3.0, a local copy of `system-charts` has been packaged into the `r Example commands for a Rancher installation with a bundled `system-charts` are included in the [air gap Docker installation]({{}}/rancher/v2.x/en/installation/air-gap-single-node/install-rancher) instructions and the [air gap Kubernetes installation]({{}}/rancher/v2.x/en/installation/air-gap-high-availability/install-rancher/) instructions. -# Setting Up System Charts for Rancher Prior to v2.3.0 +# Setting Up System Charts for Rancher Before v2.3.0 ### A. Prepare System Charts diff --git a/content/rancher/v2.x/en/istio/v2.5/configuration-reference/selectors-and-scrape/_index.md b/content/rancher/v2.x/en/istio/v2.5/configuration-reference/selectors-and-scrape/_index.md index f0f3d415e08..253bcd5917e 100644 --- a/content/rancher/v2.x/en/istio/v2.5/configuration-reference/selectors-and-scrape/_index.md +++ b/content/rancher/v2.x/en/istio/v2.5/configuration-reference/selectors-and-scrape/_index.md @@ -88,7 +88,7 @@ spec: This enables monitoring across namespaces by giving Prometheus additional scrape configurations. -The usability tradeoff is that all of Prometheus' `additionalScrapeConfigs` are maintained in a single Secret. This could make upgrading difficult if monitoring is already deployed with additionalScrapeConfigs prior to installing Istio. +The usability tradeoff is that all of Prometheus' `additionalScrapeConfigs` are maintained in a single Secret. This could make upgrading difficult if monitoring is already deployed with additionalScrapeConfigs before installing Istio. 1. If starting a new install, **Click** the **rancher-monitoring** chart, then in **Chart Options** click **Edit as Yaml**. 1. If updating an existing installation, click on **Upgrade**, then in **Chart Options** click **Edit as Yaml**. diff --git a/content/rancher/v2.x/en/k8s-in-rancher/certificates/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/certificates/_index.md index b6a10e251d8..b423c3f8e34 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/certificates/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/certificates/_index.md @@ -15,7 +15,7 @@ Add SSL certificates to either projects, namespaces, or both. A project scoped c 1. From the **Global** view, select the project where you want to deploy your ingress. -1. 
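+
+Before upgrading, it can help to check whether anything is already stored there. A sketch, assuming the default `rancher-monitoring` release name and namespace (adjust both if your installation differs):
+
+```
+# Show any additional scrape configs already referenced by the Prometheus resource
+kubectl -n cattle-monitoring-system get prometheus rancher-monitoring-prometheus \
+  -o jsonpath='{.spec.additionalScrapeConfigs}'
+```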
From the main menu, select **Resources > Secrets > Certificates**. Click **Add Certificate**. (For Rancher prior to v2.3, click **Resources > Certificates.**) +1. From the main menu, select **Resources > Secrets > Certificates**. Click **Add Certificate**. (For Rancher before v2.3, click **Resources > Certificates.**) 1. Enter a **Name** for the certificate. @@ -39,7 +39,7 @@ Add SSL certificates to either projects, namespaces, or both. A project scoped c - If you added an SSL certificate to the project, the certificate is available for deployments created in any project namespace. - If you added an SSL certificate to a namespace, the certificate is available only for deployments in that namespace. -- Your certificate is added to the **Resources > Secrets > Certificates** view. (For Rancher prior to v2.3, it is added to **Resources > Certificates.**) +- Your certificate is added to the **Resources > Secrets > Certificates** view. (For Rancher before v2.3, it is added to **Resources > Certificates.**) ## What's Next? diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md index cf5c3c1360b..421c2cb9f87 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md @@ -22,12 +22,12 @@ The way that you manage HPAs is different based on your version of the Kubernete HPAs are also managed differently based on your version of Rancher: - **For Rancher v2.3.0+**: You can create, manage, and delete HPAs using the Rancher UI. From the Rancher UI you can configure the HPA to scale based on CPU and memory utilization. For more information, refer to [Managing HPAs with the Rancher UI]({{}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui). To scale the HPA based on custom metrics, you still need to use `kubectl`. For more information, refer to [Configuring HPA to Scale Using Custom Metrics with Prometheus]({{}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus). -- **For Rancher Prior to v2.3.0:** To manage and configure HPAs, you need to use `kubectl`. For instructions on how to create, manage, and scale HPAs, refer to [Managing HPAs with kubectl]({{}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl). +- **For Rancher Before v2.3.0:** To manage and configure HPAs, you need to use `kubectl`. For instructions on how to create, manage, and scale HPAs, refer to [Managing HPAs with kubectl]({{}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl). You might have additional HPA installation steps if you are using an older version of Rancher: - **For Rancher v2.0.7+:** Clusters created in Rancher v2.0.7 and higher automatically have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use HPA. -- **For Rancher Prior to v2.0.7:** Clusters created in Rancher prior to v2.0.7 don't automatically have the requirements needed to use HPA. For instructions on installing HPA for these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7). 
+- **For Rancher Before v2.0.7:** Clusters created in Rancher before v2.0.7 don't automatically have the requirements needed to use HPA. For instructions on installing HPA for these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7). ## Testing HPAs with a Service Deployment diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md index fc54792f5e6..989eb74cd32 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md @@ -5,7 +5,7 @@ aliases: - /rancher/v2.x/en/k8s-in-rancher/horizontal-pod-autoscaler/hpa-for-rancher-before-2_0_7 --- -This section describes how to manually install HPAs for clusters created with Rancher prior to v2.0.7. This section also describes how to configure your HPA to scale up or down, and how to assign roles to your HPA. +This section describes how to manually install HPAs for clusters created with Rancher before v2.0.7. This section also describes how to configure your HPA to scale up or down, and how to assign roles to your HPA. Before you can use HPA in your Kubernetes cluster, you must fulfill some requirements. diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md index 1de89cc7be4..b6e068b2a46 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md @@ -17,7 +17,7 @@ This section describes HPA management with `kubectl`. This document has instruct In Rancher v2.3.x, you can create, view, and delete HPAs from the Rancher UI. You can also configure them to scale based on CPU or memory usage from the Rancher UI. For more information, refer to [Managing HPAs with the Rancher UI]({{}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui). For scaling HPAs based on other metrics than CPU or memory, you still need `kubectl`. -### Note For Rancher Prior to v2.0.7 +### Note For Rancher Before v2.0.7 Clusters created with older versions of Rancher don't automatically have all the requirements to create an HPA. To install an HPA on these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7). diff --git a/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress/_index.md index cb7431bf6a7..3fbc2fb2c25 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress/_index.md @@ -10,7 +10,7 @@ aliases: Ingress can be added for workloads to provide load balancing, SSL termination and host/path based routing. 
 When using ingresses in a project, you can program the ingress hostname to an external DNS by setting up a [Global DNS entry]({{}}/rancher/v2.x/en/catalog/globaldns/).

 1. From the **Global** view, open the project that you want to add ingress to.
-1. Click **Resources** in the main navigation bar. Click the **Load Balancing** tab. (In versions prior to v2.3.0, just click the **Load Balancing** tab.) Then click **Add Ingress**.
+1. Click **Resources** in the main navigation bar. Click the **Load Balancing** tab. (In versions before v2.3.0, just click the **Load Balancing** tab.) Then click **Add Ingress**.
 1. Enter a **Name** for the ingress.
 1. Select an existing **Namespace** from the drop-down list. Alternatively, you can create a new namespace on the fly by clicking **Add to a new namespace**.
 1. Create ingress forwarding **Rules**. For help configuring the rules, refer to [this section.](#ingress-rule-configuration) If any of your ingress rules handle requests for encrypted ports, add a certificate to encrypt/decrypt communications.
diff --git a/content/rancher/v2.x/en/k8s-in-rancher/registries/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/registries/_index.md
index d69751a0b14..0898d6bdc32 100644
--- a/content/rancher/v2.x/en/k8s-in-rancher/registries/_index.md
+++ b/content/rancher/v2.x/en/k8s-in-rancher/registries/_index.md
@@ -24,7 +24,7 @@ Currently, deployments pull the private registry credentials automatically only

 1. From the **Global** view, select the project containing the namespace(s) where you want to add a registry.

-1. From the main menu, click **Resources > Secrets > Registry Credentials.** (For Rancher prior to v2.3, click **Resources > Registries.)**
+1. From the main menu, click **Resources > Secrets > Registry Credentials.** (For Rancher before v2.3, click **Resources > Registries.**)

 1. Click **Add Registry.**

@@ -53,7 +53,7 @@ You can deploy a workload with an image from a private registry through the Ranc

 To deploy a workload with an image from your private registry,

 1. Go to the project view,
-1. Click **Resources > Workloads.** In versions prior to v2.3.0, go to the **Workloads** tab.
+1. Click **Resources > Workloads.** In versions before v2.3.0, go to the **Workloads** tab.
 1. Click **Deploy.**
 1. Enter a unique name for the workload and choose a namespace.
 1. In the **Docker Image** field, enter the URL of the path to the Docker image in your private registry. For example, if your private registry is on Quay.io, you could use `quay.io//`.
diff --git a/content/rancher/v2.x/en/k8s-in-rancher/service-discovery/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/service-discovery/_index.md
index a2e2d2d0815..20025ade565 100644
--- a/content/rancher/v2.x/en/k8s-in-rancher/service-discovery/_index.md
+++ b/content/rancher/v2.x/en/k8s-in-rancher/service-discovery/_index.md
@@ -13,7 +13,7 @@ However, you also have the option of creating additional Service Discovery recor

 1. From the **Global** view, open the project that you want to add a DNS record to.

-1. Click **Resources** in the main navigation bar. Click the **Service Discovery** tab. (In versions prior to v2.3.0, just click the **Service Discovery** tab.) Then click **Add Record**.
+1. Click **Resources** in the main navigation bar. Click the **Service Discovery** tab. (In versions before v2.3.0, just click the **Service Discovery** tab.) Then click **Add Record**.

 1. Enter a **Name** for the DNS record. This name is used for DNS resolution.
diff --git a/content/rancher/v2.x/en/k8s-in-rancher/workloads/add-a-sidecar/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/workloads/add-a-sidecar/_index.md
index 1659afde060..75f40805972 100644
--- a/content/rancher/v2.x/en/k8s-in-rancher/workloads/add-a-sidecar/_index.md
+++ b/content/rancher/v2.x/en/k8s-in-rancher/workloads/add-a-sidecar/_index.md
@@ -9,7 +9,7 @@ A _sidecar_ is a container that extends or enhances the main container in a pod.

 1. From the **Global** view, open the project running the workload you want to add a sidecar to.

-1. Click **Resources > Workloads.** In versions prior to v2.3.0, select the **Workloads** tab.
+1. Click **Resources > Workloads.** In versions before v2.3.0, select the **Workloads** tab.

 1. Find the workload that you want to extend. Select **⋮ icon (...) > Add a Sidecar**.

diff --git a/content/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/_index.md
index 22e390f115b..1e9892d978b 100644
--- a/content/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/_index.md
+++ b/content/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/_index.md
@@ -11,7 +11,7 @@ Deploy a workload to run an application in one or more containers.

 1. From the **Global** view, open the project that you want to deploy a workload to.

-1. 1. Click **Resources > Workloads.** (In versions prior to v2.3.0, click the **Workloads** tab.) From the **Workloads** view, click **Deploy**.
+1. Click **Resources > Workloads.** (In versions before v2.3.0, click the **Workloads** tab.) From the **Workloads** view, click **Deploy**.

 1. Enter a **Name** for the workload.

diff --git a/content/rancher/v2.x/en/logging/v2.0.x-v2.4.x/_index.md b/content/rancher/v2.x/en/logging/v2.0.x-v2.4.x/_index.md
index b9e9e219e14..33c8ded537c 100644
--- a/content/rancher/v2.x/en/logging/v2.0.x-v2.4.x/_index.md
+++ b/content/rancher/v2.x/en/logging/v2.0.x-v2.4.x/_index.md
@@ -5,7 +5,7 @@ weight: 2
 ---

-This section contains documentation for the logging features that were available in Rancher prior to v2.5.
+This section contains documentation for the logging features that were available in Rancher before v2.5.

 - [Cluster logging](./cluster-logging)
 - [Project logging](./project-logging)
\ No newline at end of file
diff --git a/content/rancher/v2.x/en/logging/v2.0.x-v2.4.x/project-logging/_index.md b/content/rancher/v2.x/en/logging/v2.0.x-v2.4.x/project-logging/_index.md
index d4547dc6910..74c738cf590 100644
--- a/content/rancher/v2.x/en/logging/v2.0.x-v2.4.x/project-logging/_index.md
+++ b/content/rancher/v2.x/en/logging/v2.0.x-v2.4.x/project-logging/_index.md
@@ -59,7 +59,7 @@ Logs that are sent to your logging service are from the following locations:

 1. From the **Global** view, navigate to the project that you want to configure project logging.

-1. Select **Tools > Logging** in the navigation bar. In versions prior to v2.2.0, you can choose **Resources > Logging**.
+1. Select **Tools > Logging** in the navigation bar. In versions before v2.2.0, you can choose **Resources > Logging**.

 1. Select a logging service and enter the configuration. Refer to the specific service for detailed configuration.
Rancher supports the following services: diff --git a/content/rancher/v2.x/en/longhorn/_index.md b/content/rancher/v2.x/en/longhorn/_index.md index 23759df8a4e..ed5a42b1370 100644 --- a/content/rancher/v2.x/en/longhorn/_index.md +++ b/content/rancher/v2.x/en/longhorn/_index.md @@ -24,7 +24,7 @@ With Longhorn, you can: ### New in Rancher v2.5 -Prior to Rancher v2.5, Longhorn could be installed as a Rancher catalog app. In Rancher v2.5, the catalog system was replaced by the **Apps & Marketplace,** and it became possible to install Longhorn as an app from that page. +Before Rancher v2.5, Longhorn could be installed as a Rancher catalog app. In Rancher v2.5, the catalog system was replaced by the **Apps & Marketplace,** and it became possible to install Longhorn as an app from that page. The **Cluster Explorer** now allows you to manipulate Longhorn's Kubernetes resources from the Rancher UI. So now you can control the Longhorn functionality with the Longhorn UI, or with kubectl, or by manipulating Longhorn's Kubernetes custom resources in the Rancher UI. diff --git a/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/_index.md b/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/_index.md index d95422eb05c..8a5fc177792 100644 --- a/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/_index.md @@ -4,7 +4,7 @@ shortTitle: Rancher v2.0-v2.4 weight: 2 --- -This section contains documentation related to the monitoring features available in Rancher prior to v2.5. +This section contains documentation related to the monitoring features available in Rancher before v2.5. diff --git a/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-alerts/project-alerts/_index.md b/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-alerts/project-alerts/_index.md index c75adf126fc..041692b3336 100644 --- a/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-alerts/project-alerts/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-alerts/project-alerts/_index.md @@ -53,7 +53,7 @@ For information on other default alerts, refer to the section on [cluster-level >**Prerequisite:** Before you can receive project alerts, you must add a notifier. -1. From the **Global** view, navigate to the project that you want to configure project alerts for. Select **Tools > Alerts**. In versions prior to v2.2.0, you can choose **Resources > Alerts**. +1. From the **Global** view, navigate to the project that you want to configure project alerts for. Select **Tools > Alerts**. In versions before v2.2.0, you can choose **Resources > Alerts**. 1. Click **Add Alert Group**. @@ -75,7 +75,7 @@ For information on other default alerts, refer to the section on [cluster-level # Managing Project Alerts -To manage project alerts, browse to the project that alerts you want to manage. Then select **Tools > Alerts**. In versions prior to v2.2.0, you can choose **Resources > Alerts**. You can: +To manage project alerts, browse to the project that alerts you want to manage. Then select **Tools > Alerts**. In versions before v2.2.0, you can choose **Resources > Alerts**. 
You can: - Deactivate/Reactive alerts - Edit alert settings diff --git a/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-monitoring/cluster-metrics/_index.md b/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-monitoring/cluster-metrics/_index.md index 1be8c9eca40..8d28fbd287d 100644 --- a/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-monitoring/cluster-metrics/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-monitoring/cluster-metrics/_index.md @@ -105,7 +105,7 @@ Workload metrics display the hardware utilization for a Kubernetes workload. You 1. From the **Global** view, navigate to the project that you want to view workload metrics. -1. From the main navigation bar, choose **Resources > Workloads.** In versions prior to v2.3.0, choose **Workloads** on the main navigation bar. +1. From the main navigation bar, choose **Resources > Workloads.** In versions before v2.3.0, choose **Workloads** on the main navigation bar. 1. Select a specific workload and click on its name. diff --git a/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-monitoring/project-monitoring/_index.md b/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-monitoring/project-monitoring/_index.md index c0d5d7daca8..802da91e7d5 100644 --- a/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-monitoring/project-monitoring/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-monitoring/project-monitoring/_index.md @@ -72,7 +72,7 @@ To access a project-level Grafana instance, 1. Go to a project that has monitoring enabled. -1. From the project view, click **Apps.** In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar. +1. From the project view, click **Apps.** In versions before v2.2.0, choose **Catalog Apps** on the main navigation bar. 1. Go to the `project-monitoring` application. diff --git a/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-monitoring/viewing-metrics/_index.md b/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-monitoring/viewing-metrics/_index.md index 766bc5ada6c..70c64d2ee4e 100644 --- a/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-monitoring/viewing-metrics/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-monitoring/viewing-metrics/_index.md @@ -19,9 +19,9 @@ Rancher's dashboards are available at multiple locations: - **Cluster Dashboard**: From the **Global** view, navigate to the cluster. - **Node Metrics**: From the **Global** view, navigate to the cluster. Select **Nodes**. Find the individual node and click on its name. Click **Node Metrics.** -- **Workload Metrics**: From the **Global** view, navigate to the project. From the main navigation bar, choose **Resources > Workloads.** (In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.) Find the individual workload and click on its name. Click **Workload Metrics.** +- **Workload Metrics**: From the **Global** view, navigate to the project. From the main navigation bar, choose **Resources > Workloads.** (In versions before v2.3.0, choose **Workloads** on the main navigation bar.) Find the individual workload and click on its name. Click **Workload Metrics.** - **Pod Metrics**: From the **Global** view, navigate to the project. Select **Workloads > Workloads**. Find the individual workload and click on its name. 
 Find the individual pod and click on its name. Click **Pod Metrics.**
-- **Container Metrics**: From the **Global** view, navigate to the project. From the main navigation bar, choose **Resources > Workloads.** (In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.) Find the individual workload and click on its name. Find the individual pod and click on its name. Find the individual container and click on its name. Click **Container Metrics.**
+- **Container Metrics**: From the **Global** view, navigate to the project. From the main navigation bar, choose **Resources > Workloads.** (In versions before v2.3.0, choose **Workloads** on the main navigation bar.) Find the individual workload and click on its name. Find the individual pod and click on its name. Find the individual container and click on its name. Click **Container Metrics.**

 Prometheus metrics are displayed and are denoted with the Grafana icon. If you click on the icon, the metrics will open a new tab in Grafana.

@@ -53,7 +53,7 @@ When you go to the Grafana instance, you will be logged in with the username `ad

 1. Go to the **System** project view. This project is where the cluster-level Grafana instance runs.

-1. Click **Apps.** In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar.
+1. Click **Apps.** In versions before v2.2.0, choose **Catalog Apps** on the main navigation bar.

 1. Go to the `cluster-monitoring` application.

diff --git a/content/rancher/v2.x/en/monitoring-alerting/v2.5/_index.md b/content/rancher/v2.x/en/monitoring-alerting/v2.5/_index.md
index 19dc1c0a70b..1249041c229 100644
--- a/content/rancher/v2.x/en/monitoring-alerting/v2.5/_index.md
+++ b/content/rancher/v2.x/en/monitoring-alerting/v2.5/_index.md
@@ -19,7 +19,7 @@ Rancher's solution allows users to:

 More information about the resources that get deployed onto your cluster to support this solution can be found in the [`rancher-monitoring`](https://github.com/rancher/charts/tree/main/charts/rancher-monitoring) Helm chart, which closely tracks the upstream [kube-prometheus-stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) Helm chart maintained by the Prometheus community with certain changes tracked in the [CHANGELOG.md](https://github.com/rancher/charts/blob/main/charts/rancher-monitoring/CHANGELOG.md).

-> If you previously enabled Monitoring, Alerting, or Notifiers in Rancher prior to v2.5, there is no upgrade path for switching to the new monitoring/ alerting solution. You will need to disable monitoring/ alerting/notifiers in Cluster Manager before deploying the new monitoring solution via Cluster Explorer.
+> If you previously enabled Monitoring, Alerting, or Notifiers in Rancher before v2.5, there is no upgrade path for switching to the new monitoring/alerting solution. You will need to disable monitoring/alerting/notifiers in Cluster Manager before deploying the new monitoring solution via Cluster Explorer.

 For more information about upgrading the Monitoring app in Rancher 2.5, please refer to the [migration docs](./migrating).
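+
+If you prefer the command line to the Cluster Explorer UI, the same charts can be installed with Helm. A sketch, assuming Rancher's chart repository and the conventional namespace (check the chart's README for values appropriate to your cluster):
+
+```
+helm repo add rancher-charts https://charts.rancher.io
+helm repo update
+# The CRD chart must be installed before the main chart
+helm install rancher-monitoring-crd rancher-charts/rancher-monitoring-crd \
+  -n cattle-monitoring-system --create-namespace
+helm install rancher-monitoring rancher-charts/rancher-monitoring \
+  -n cattle-monitoring-system
+```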
diff --git a/content/rancher/v2.x/en/monitoring-alerting/v2.5/migrating/_index.md b/content/rancher/v2.x/en/monitoring-alerting/v2.5/migrating/_index.md
index 2960a60d8a9..3d1a5bd98ab 100644
--- a/content/rancher/v2.x/en/monitoring-alerting/v2.5/migrating/_index.md
+++ b/content/rancher/v2.x/en/monitoring-alerting/v2.5/migrating/_index.md
@@ -5,11 +5,11 @@ aliases:
 - /rancher/v2.x/en/monitoring-alerting/migrating
---

-If you previously enabled Monitoring, Alerting, or Notifiers in Rancher prior to v2.5, there is no automatic upgrade path for switching to the new monitoring/alerting solution. Before deploying the new monitoring solution via Cluster Explore, you will need to disable and remove all existing custom alerts, notifiers and monitoring installations for the whole cluster and in all projects.
+If you previously enabled Monitoring, Alerting, or Notifiers in Rancher before v2.5, there is no automatic upgrade path for switching to the new monitoring/alerting solution. Before deploying the new monitoring solution via Cluster Explorer, you will need to disable and remove all existing custom alerts, notifiers and monitoring installations for the whole cluster and in all projects.

-### Monitoring Prior to Rancher v2.5
+### Monitoring Before Rancher v2.5

-As of v2.2.0, Rancher's Cluster Manager allowed users to enable Monitoring & Alerting V1 (both powered by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator)) independently within a cluster. For more information on how to configure Monitoring & Alerting V1, see the [docs about monitoring prior to Rancher v2.5]({{}}/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x).
+As of v2.2.0, Rancher's Cluster Manager allowed users to enable Monitoring & Alerting V1 (both powered by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator)) independently within a cluster. For more information on how to configure Monitoring & Alerting V1, see the [docs about monitoring before Rancher v2.5]({{}}/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x).

When Monitoring is enabled, Monitoring V1 deploys [Prometheus](https://prometheus.io/) and [Grafana](https://grafana.com/docs/grafana/latest/getting-started/what-is-grafana/) onto a cluster to monitor the state of processes of your cluster nodes, Kubernetes components, and software deployments and create custom dashboards to make it easy to visualize collected metrics.

diff --git a/content/rancher/v2.x/en/overview/architecture-recommendations/_index.md b/content/rancher/v2.x/en/overview/architecture-recommendations/_index.md
index 367e66a9ac8..3b46c954231 100644
--- a/content/rancher/v2.x/en/overview/architecture-recommendations/_index.md
+++ b/content/rancher/v2.x/en/overview/architecture-recommendations/_index.md
@@ -43,7 +43,7 @@ The option to install Rancher on a K3s cluster is a feature introduced in Ranche

### RKE Kubernetes Cluster Installations

-If you are installing Rancher prior to v2.4, you will need to install Rancher on an RKE cluster, in which the cluster data is stored on each node with the etcd role. As of Rancher v2.4, there is no migration path to transition the Rancher server from an RKE cluster to a K3s cluster. All versions of the Rancher server, including v2.4+, can be installed on an RKE cluster.
+If you are installing Rancher before v2.4, you will need to install Rancher on an RKE cluster, in which the cluster data is stored on each node with the etcd role.
As of Rancher v2.4, there is no migration path to transition the Rancher server from an RKE cluster to a K3s cluster. All versions of the Rancher server, including v2.4+, can be installed on an RKE cluster. In an RKE installation, the cluster data is replicated on each of three etcd nodes in the cluster, providing redundancy and data duplication in case one of the nodes fails. diff --git a/content/rancher/v2.x/en/overview/architecture/_index.md b/content/rancher/v2.x/en/overview/architecture/_index.md index ea9ef16fa50..b4c547f89c9 100644 --- a/content/rancher/v2.x/en/overview/architecture/_index.md +++ b/content/rancher/v2.x/en/overview/architecture/_index.md @@ -45,7 +45,7 @@ A high-availability Kubernetes installation is recommended for production. A Docker installation of Rancher is recommended only for development and testing purposes. The ability to migrate Rancher to a high-availability cluster depends on the Rancher version: -- For Rancher v2.0-v2.4, there was no migration path from a Docker installation to a high-availability installation. Therefore, if you are using Rancher prior to v2.5, you may want to use a Kubernetes installation from the start. +- For Rancher v2.0-v2.4, there was no migration path from a Docker installation to a high-availability installation. Therefore, if you are using Rancher before v2.5, you may want to use a Kubernetes installation from the start. - For Rancher v2.5+, the Rancher backup operator can be used to migrate Rancher from the single Docker container install to an installation on a high-availability Kubernetes cluster. For details, refer to the documentation on [migrating Rancher to a new cluster.]({{}}/rancher/v2.x/en/backups/v2.5/migrating-rancher/) diff --git a/content/rancher/v2.x/en/pipelines/_index.md b/content/rancher/v2.x/en/pipelines/_index.md index c1293be7518..107914aa2c7 100644 --- a/content/rancher/v2.x/en/pipelines/_index.md +++ b/content/rancher/v2.x/en/pipelines/_index.md @@ -101,7 +101,7 @@ Select your provider's tab below and follow the directions. {{% tab "GitHub" %}} 1. From the **Global** view, navigate to the project that you want to configure pipelines. -1. Select **Tools > Pipelines** in the navigation bar. In versions prior to v2.2.0, you can select **Resources > Pipelines**. +1. Select **Tools > Pipelines** in the navigation bar. In versions before v2.2.0, you can select **Resources > Pipelines**. 1. Follow the directions displayed to **Setup a Github application**. Rancher redirects you to Github to setup an OAuth App in Github. @@ -118,7 +118,7 @@ _Available as of v2.1.0_ 1. From the **Global** view, navigate to the project that you want to configure pipelines. -1. Select **Tools > Pipelines** in the navigation bar. In versions prior to v2.2.0, you can select **Resources > Pipelines**. +1. Select **Tools > Pipelines** in the navigation bar. In versions before v2.2.0, you can select **Resources > Pipelines**. 1. Follow the directions displayed to **Setup a GitLab application**. Rancher redirects you to GitLab. @@ -182,7 +182,7 @@ After the version control provider is authorized, you are automatically re-direc 1. From the **Global** view, navigate to the project that you want to configure pipelines. -1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.** +1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.** 1. Click on **Configure Repositories**. 
@@ -200,7 +200,7 @@ Now that repositories are added to your project, you can start configuring the p

1. From the **Global** view, navigate to the project that you want to configure pipelines.

-1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
+1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. Find the repository that you want to set up a pipeline for.

@@ -243,7 +243,7 @@ The configuration reference also covers how to configure:

# Running your Pipelines

-Run your pipeline for the first time. From the project view in Rancher, go to **Resources > Pipelines.** (In versions prior to v2.3.0, go to the **Pipelines** tab.) Find your pipeline and select the vertical **⋮ > Run**.
+Run your pipeline for the first time. From the project view in Rancher, go to **Resources > Pipelines.** (In versions before v2.3.0, go to the **Pipelines** tab.) Find your pipeline and select the vertical **⋮ > Run**.

During this initial run, your pipeline is tested, and the following pipeline components are deployed to your project as workloads in a new namespace dedicated to the pipeline:

@@ -269,7 +269,7 @@ Available Events:

1. From the **Global** view, navigate to the project that you want to modify the event trigger for the pipeline.

-1. 1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
+1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. Find the repository that you want to modify the event triggers. Select the vertical **⋮ > Setting**.

diff --git a/content/rancher/v2.x/en/pipelines/config/_index.md b/content/rancher/v2.x/en/pipelines/config/_index.md
index 639aed2d570..124678f0adc 100644
--- a/content/rancher/v2.x/en/pipelines/config/_index.md
+++ b/content/rancher/v2.x/en/pipelines/config/_index.md
@@ -393,7 +393,7 @@ This section covers the following topics:

1. From the **Global** view, navigate to the project that you want to configure a pipeline trigger rule.

-1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
+1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. From the repository for which you want to manage trigger rules, select the vertical **⋮ > Edit Config**.

@@ -411,7 +411,7 @@ This section covers the following topics:

1. From the **Global** view, navigate to the project that you want to configure a stage trigger rule.

-1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
+1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. From the repository for which you want to manage trigger rules, select the vertical **⋮ > Edit Config**.

@@ -436,7 +436,7 @@ This section covers the following topics:

1. From the **Global** view, navigate to the project that you want to configure a stage trigger rule.

-1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
+1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. From the repository for which you want to manage trigger rules, select the vertical **⋮ > Edit Config**.

@@ -491,7 +491,7 @@ When configuring a pipeline, certain [step types](#step-types) allow you to use

1. From the **Global** view, navigate to the project that you want to configure pipelines.

-1.
Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
+1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. From the pipeline for which you want to edit build triggers, select **⋮ > Edit Config**.

@@ -534,7 +534,7 @@ Create a secret in the same project as your pipeline, or explicitly in the names

1. From the **Global** view, navigate to the project that you want to configure pipelines.

-1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
+1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. From the pipeline for which you want to edit build triggers, select **⋮ > Edit Config**.

@@ -584,7 +584,7 @@ Variable Name | Description

# Global Pipeline Execution Settings

-After configuring a version control provider, there are several options that can be configured globally on how pipelines are executed in Rancher. These settings can be edited by selecting **Tools > Pipelines** in the navigation bar. In versions prior to v2.2.0, you can select **Resources > Pipelines**.
+After configuring a version control provider, there are several options that can be configured globally on how pipelines are executed in Rancher. These settings can be edited by selecting **Tools > Pipelines** in the navigation bar. In versions before v2.2.0, you can select **Resources > Pipelines**.

- [Executor Quota](#executor-quota)
- [Resource Quota for Executors](#resource-quota-for-executors)

diff --git a/content/rancher/v2.x/en/pipelines/docs-for-v2.0.x/_index.md b/content/rancher/v2.x/en/pipelines/docs-for-v2.0.x/_index.md
index 7f9f61b3950..3dc486eef9f 100644
--- a/content/rancher/v2.x/en/pipelines/docs-for-v2.0.x/_index.md
+++ b/content/rancher/v2.x/en/pipelines/docs-for-v2.0.x/_index.md
@@ -37,7 +37,7 @@ You can set up your pipeline to run a series of stages and steps to test your co

1. Go to the project you want this pipeline to run in.

-2. Click **Resources > Pipelines.** In versions prior to v2.3.0,click **Workloads > Pipelines.**
+2. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

4. Click Add pipeline button.

diff --git a/content/rancher/v2.x/en/pipelines/example-repos/_index.md b/content/rancher/v2.x/en/pipelines/example-repos/_index.md
index a1a2a4b7995..a14cd85fd08 100644
--- a/content/rancher/v2.x/en/pipelines/example-repos/_index.md
+++ b/content/rancher/v2.x/en/pipelines/example-repos/_index.md
@@ -26,7 +26,7 @@ By default, the example pipeline repositories are disabled. Enable one (or more)

1. From the **Global** view, navigate to the project that you want to test out pipelines.

-1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
+1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. Click **Configure Repositories**.

@@ -52,7 +52,7 @@ After enabling an example repository, review the pipeline to see how it is set u

1. From the **Global** view, navigate to the project that you want to test out pipelines.

-1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
+1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. Find the example repository, select the vertical **⋮**. There are two ways to view the pipeline:
   * **Rancher UI**: Click on **Edit Config** to view the stages and steps of the pipeline.
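The **Edit Config** view referenced above is backed by a `.rancher-pipeline.yml` file in the repository. As a hedged illustration of what those stages and steps look like in YAML — the stage names, image, and commands below are invented for the example, not taken from the patched docs:

```bash
# Write an illustrative .rancher-pipeline.yml at the repository root.
# A pipeline is a list of stages; each stage runs one or more steps.
cat > .rancher-pipeline.yml <<'EOF'
stages:
  - name: Build
    steps:
      - runScriptConfig:
          image: golang:1.13
          shellScript: go build ./...
  - name: Test
    steps:
      - runScriptConfig:
          image: golang:1.13
          shellScript: go test ./...
EOF
```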
@@ -64,7 +64,7 @@ After enabling an example repository, run the pipeline to see how it works.

1. From the **Global** view, navigate to the project that you want to test out pipelines.

-1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
+1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. Find the example repository, select the vertical **⋮ > Run**.

diff --git a/content/rancher/v2.x/en/pipelines/storage/_index.md b/content/rancher/v2.x/en/pipelines/storage/_index.md
index f67502901e5..bc6decb2c83 100644
--- a/content/rancher/v2.x/en/pipelines/storage/_index.md
+++ b/content/rancher/v2.x/en/pipelines/storage/_index.md
@@ -15,7 +15,7 @@ This section assumes that you understand how persistent storage works in Kuberne

### A. Configuring Persistent Data for Docker Registry

-1. From the project that you're configuring a pipeline for, and click **Resources > Workloads.** In versions prior to v2.3.0, select the **Workloads** tab.
+1. From the project that you're configuring a pipeline for, click **Resources > Workloads.** In versions before v2.3.0, select the **Workloads** tab.

1. Find the `docker-registry` workload and select **⋮ > Edit**.

@@ -61,7 +61,7 @@ This section assumes that you understand how persistent storage works in Kuberne

### B. Configuring Persistent Data for Minio

-1. From the project view, click **Resources > Workloads.** (In versions prior to v2.3.0, click the **Workloads** tab.) Find the `minio` workload and select **⋮ > Edit**.
+1. From the project view, click **Resources > Workloads.** (In versions before v2.3.0, click the **Workloads** tab.) Find the `minio` workload and select **⋮ > Edit**.

1. Scroll to the **Volumes** section and expand it. Make one of the following selections from the **Add Volume** menu, which is near the bottom of the section:

diff --git a/content/rancher/v2.x/en/project-admin/resource-quotas/override-container-default/_index.md b/content/rancher/v2.x/en/project-admin/resource-quotas/override-container-default/_index.md
index 1c15bad1155..2827e3a2f37 100644
--- a/content/rancher/v2.x/en/project-admin/resource-quotas/override-container-default/_index.md
+++ b/content/rancher/v2.x/en/project-admin/resource-quotas/override-container-default/_index.md
@@ -27,7 +27,7 @@ Edit [container default resource limit]({{}}/rancher/v2.x/en/k8s-in-ran

When the default container resource limit is set at a project level, the parameter will be propagated to any namespace created in the project after the limit has been set. For any existing namespace in a project, this limit will not be automatically propagated. You will need to manually set the default container resource limit for any existing namespaces in the project in order for it to be used when creating any containers.

-> **Note:** Prior to v2.2.0, you could not launch catalog applications that did not have any limits set. With v2.2.0, you can set a default container resource limit on a project and launch any catalog applications.
+> **Note:** Before v2.2.0, you could not launch catalog applications that did not have any limits set. With v2.2.0, you can set a default container resource limit on a project and launch any catalog applications.

Once a container default resource limit is configured on a namespace, the default will be pre-populated for any containers created in that namespace. These limits/reservations can always be overridden during workload creation.
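A namespace-level container default resource limit as described above corresponds to a Kubernetes LimitRange object in that namespace. A minimal sketch of the equivalent manifest — the namespace and values are illustrative, not taken from the docs being patched:

```bash
# Apply an illustrative LimitRange; new containers in the namespace are
# pre-populated with these defaults unless overridden at workload creation.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults
  namespace: my-namespace
spec:
  limits:
    - type: Container
      default:          # default resource limits
        cpu: 500m
        memory: 256Mi
      defaultRequest:   # default resource reservations
        cpu: 100m
        memory: 128Mi
EOF
```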
diff --git a/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-ingress/_index.md b/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-ingress/_index.md
index 3580b314f71..8819878d4ca 100644
--- a/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-ingress/_index.md
+++ b/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-ingress/_index.md
@@ -19,7 +19,7 @@ For this workload, you'll be deploying the application Rancher Hello-World.

3. Open the **Project: Default** project.

-4. Click **Resources > Workloads.** In versions prior to v2.3.0, click **Workloads > Workloads.**
+4. Click **Resources > Workloads.** In versions before v2.3.0, click **Workloads > Workloads.**

5. Click **Deploy**.

@@ -49,7 +49,7 @@ Now that the application is up and running it needs to be exposed so that other

3. Open the **Default** project.

-4. Click **Resources > Workloads > Load Balancing.** In versions prior to v2.3.0, click the **Workloads** tab. Click on the **Load Balancing** tab.
+4. Click **Resources > Workloads > Load Balancing.** In versions before v2.3.0, click the **Workloads** tab. Click on the **Load Balancing** tab.

5. Click **Add Ingress**.

diff --git a/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-nodeport/_index.md b/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-nodeport/_index.md
index fbe0f995ce6..2920ee57953 100644
--- a/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-nodeport/_index.md
+++ b/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-nodeport/_index.md
@@ -19,7 +19,7 @@ For this workload, you'll be deploying the application Rancher Hello-World.

3. Open the **Project: Default** project.

-4. Click **Resources > Workloads.** In versions prior to v2.3.0, click **Workloads > Workloads.**
+4. Click **Resources > Workloads.** In versions before v2.3.0, click **Workloads > Workloads.**

5. Click **Deploy**.

diff --git a/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.5/hardening-2.3.5/_index.md b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.5/hardening-2.3.5/_index.md
index 1701e56ff39..723a700a630 100644
--- a/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.5/hardening-2.3.5/_index.md
+++ b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.5/hardening-2.3.5/_index.md
@@ -44,7 +44,7 @@ kernel.keys.root_maxbytes=25000000

Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.

### Configure `etcd` user and group
-A user account and group for the **etcd** service is required to be setup prior to installing RKE. The **uid** and **gid** for the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories during installation time.
+A user account and group for the **etcd** service is required to be set up before installing RKE. The **uid** and **gid** for the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories during installation time.

#### create `etcd` user and group
To create the **etcd** group run the following console commands.
diff --git a/content/rancher/v2.x/en/security/rancher-2.4/hardening-2.4/_index.md b/content/rancher/v2.x/en/security/rancher-2.4/hardening-2.4/_index.md
index 583080c10af..857aefbadaf 100644
--- a/content/rancher/v2.x/en/security/rancher-2.4/hardening-2.4/_index.md
+++ b/content/rancher/v2.x/en/security/rancher-2.4/hardening-2.4/_index.md
@@ -44,7 +44,7 @@ kernel.keys.root_maxbytes=25000000

Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.

### Configure `etcd` user and group
-A user account and group for the **etcd** service is required to be setup prior to installing RKE. The **uid** and **gid** for the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories during installation time.
+A user account and group for the **etcd** service is required to be set up before installing RKE. The **uid** and **gid** for the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories during installation time.

#### create `etcd` user and group
To create the **etcd** group run the following console commands.

diff --git a/content/rancher/v2.x/en/security/rancher-2.5/1.5-hardening-2.5/_index.md b/content/rancher/v2.x/en/security/rancher-2.5/1.5-hardening-2.5/_index.md
index cb5f7d2156e..2e163411676 100644
--- a/content/rancher/v2.x/en/security/rancher-2.5/1.5-hardening-2.5/_index.md
+++ b/content/rancher/v2.x/en/security/rancher-2.5/1.5-hardening-2.5/_index.md
@@ -41,7 +41,7 @@ kernel.keys.root_maxbytes=25000000

Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.

### Configure `etcd` user and group
-A user account and group for the **etcd** service is required to be setup prior to installing RKE. The **uid** and **gid** for the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories during installation time.
+A user account and group for the **etcd** service is required to be set up before installing RKE. The **uid** and **gid** for the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories during installation time.

#### create `etcd` user and group
To create the **etcd** group run the following console commands.

diff --git a/content/rancher/v2.x/en/security/rancher-2.5/1.6-hardening-2.5/_index.md b/content/rancher/v2.x/en/security/rancher-2.5/1.6-hardening-2.5/_index.md
index efc9c393e69..9ffbd480966 100644
--- a/content/rancher/v2.x/en/security/rancher-2.5/1.6-hardening-2.5/_index.md
+++ b/content/rancher/v2.x/en/security/rancher-2.5/1.6-hardening-2.5/_index.md
@@ -41,7 +41,7 @@ kernel.keys.root_maxbytes=25000000

Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.

### Configure `etcd` user and group
-A user account and group for the **etcd** service is required to be setup prior to installing RKE. The **uid** and **gid** for the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories during installation time.
+A user account and group for the **etcd** service is required to be set up before installing RKE. The **uid** and **gid** for the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories during installation time.

#### create `etcd` user and group
To create the **etcd** group run the following console commands.
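Each of the hardening hunks above ends just before the commands themselves. For context, a typical sketch of creating the dedicated account — the uid/gid value is illustrative and must match what you set in the RKE `config.yml`:

```bash
# Create the etcd group and a matching no-login service account.
# 52034 is an example uid/gid; the hardening guides pin specific values.
groupadd --gid 52034 etcd
useradd --comment "etcd service account" \
  --uid 52034 --gid 52034 --shell /usr/sbin/nologin etcd
```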
diff --git a/content/rancher/v2.x/en/v1.6-migration/discover-services/_index.md b/content/rancher/v2.x/en/v1.6-migration/discover-services/_index.md
index 0df7741ae6b..74147a826f0 100644
--- a/content/rancher/v2.x/en/v1.6-migration/discover-services/_index.md
+++ b/content/rancher/v2.x/en/v1.6-migration/discover-services/_index.md
@@ -71,7 +71,7 @@ In the image below, the `web-deployment.yml` and `web-service.yml` files [create

Just as you can create an alias for Rancher v1.6 services, you can do the same for Rancher v2.x workloads. Similarly, you can also create DNS records pointing to services running externally, using either their hostname or IP address. These DNS records are Kubernetes service objects.

-Using the v2.x UI, use the context menu to navigate to the `Project` view. Then click **Resources > Workloads > Service Discovery.** (In versions prior to v2.3.0, click the **Workloads > Service Discovery** tab.) All existing DNS records created for your workloads are listed under each namespace.
+Using the v2.x UI, use the context menu to navigate to the `Project` view. Then click **Resources > Workloads > Service Discovery.** (In versions before v2.3.0, click the **Workloads > Service Discovery** tab.) All existing DNS records created for your workloads are listed under each namespace.

Click **Add Record** to create new DNS records. Then view the various options supported to link to external services or to create aliases for another workload, DNS record, or set of pods.

diff --git a/content/rancher/v2.x/en/v1.6-migration/load-balancing/_index.md b/content/rancher/v2.x/en/v1.6-migration/load-balancing/_index.md
index 183eef1bee3..b25e5709e35 100644
--- a/content/rancher/v2.x/en/v1.6-migration/load-balancing/_index.md
+++ b/content/rancher/v2.x/en/v1.6-migration/load-balancing/_index.md
@@ -74,14 +74,14 @@ Although Rancher v2.x supports HTTP and HTTPS hostname and path-based load balan

## Deploying Ingress

-You can launch a new load balancer to replace your load balancer from v1.6. Using the Rancher v2.x UI, browse to the applicable project and choose **Resources > Workloads > Load Balancing.** (In versions prior to v2.3.0, click **Workloads > Load Balancing.**) Then click **Deploy**. During deployment, you can choose a target project or namespace.
+You can launch a new load balancer to replace your load balancer from v1.6. Using the Rancher v2.x UI, browse to the applicable project and choose **Resources > Workloads > Load Balancing.** (In versions before v2.3.0, click **Workloads > Load Balancing.**) Then click **Deploy**. During deployment, you can choose a target project or namespace.

>**Prerequisite:** Before deploying Ingress, you must have a workload deployed that's running a scale of two or more pods.
> ![Workload Scale]({{}}/img/rancher/workload-scale.png)

-For balancing between these two pods, you must create a Kubernetes Ingress rule. To create this rule, navigate to your cluster and project, and click **Resources > Workloads > Load Balancing.** (In versions prior to v2.3.0, click **Workloads > Load Balancing.**) Then click **Add Ingress**. This GIF below depicts how to add Ingress to one of your projects.
+For balancing between these two pods, you must create a Kubernetes Ingress rule. To create this rule, navigate to your cluster and project, and click **Resources > Workloads > Load Balancing.** (In versions before v2.3.0, click **Workloads > Load Balancing.**) Then click **Add Ingress**. The GIF below depicts how to add Ingress to one of your projects.
Browsing to Load Balancer Tab and Adding Ingress
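For readers who prefer a manifest over the UI walkthrough captured in the GIF, a minimal sketch of an equivalent Ingress rule — the host, service name, and port are illustrative placeholders, and the `networking.k8s.io/v1beta1` API matches the Kubernetes versions these docs target:

```bash
# Apply an illustrative Ingress that balances traffic across the pods
# behind a "hello-world" service in the default namespace.
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: default
spec:
  rules:
    - host: hello.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: hello-world
              servicePort: 80
EOF
```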
diff --git a/content/rancher/v2.x/en/v1.6-migration/run-migration-tool/_index.md b/content/rancher/v2.x/en/v1.6-migration/run-migration-tool/_index.md
index ebdebd5b9bd..fa810e81955 100644
--- a/content/rancher/v2.x/en/v1.6-migration/run-migration-tool/_index.md
+++ b/content/rancher/v2.x/en/v1.6-migration/run-migration-tool/_index.md
@@ -263,7 +263,7 @@ Use the following Rancher CLI commands to deploy your application using Rancher

{{% /tab %}}
{{% /tabs %}}

-Following importation, you can view your v1.6 services in the v2.x UI as Kubernetes manifests by using the context menu to select ` > ` that contains your services. The imported manifests will display on the **Resources > Workloads** and on the tab at **Resources > Workloads > Service Discovery.** (In Rancher v2.x prior to v2.3.0, these are on the **Workloads** and **Service Discovery** tabs in the top navigation bar.)
+Following importation, you can view your v1.6 services in the v2.x UI as Kubernetes manifests by using the context menu to select ` > ` that contains your services. The imported manifests will display under **Resources > Workloads** and under **Resources > Workloads > Service Discovery.** (In Rancher v2.x before v2.3.0, these are on the **Workloads** and **Service Discovery** tabs in the top navigation bar.)
Imported Services
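The tabbed section above refers to Rancher CLI commands for deploying the converted manifests. A hedged sketch of what that flow typically looks like — the server URL and token are placeholders, and the file names reuse the `web-deployment.yml`/`web-service.yml` examples mentioned earlier:

```bash
# Authenticate the Rancher CLI, choose the target cluster and project,
# then apply the manifests produced by the migration tools.
rancher login https://rancher.example.com --token <BEARER_TOKEN>
rancher context switch    # interactively pick <cluster>:<project>
rancher kubectl apply -f web-deployment.yml -f web-service.yml
```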
diff --git a/content/rancher/v2.x/en/v1.6-migration/schedule-workloads/_index.md b/content/rancher/v2.x/en/v1.6-migration/schedule-workloads/_index.md index e78fa280b0c..99f4df0d67f 100644 --- a/content/rancher/v2.x/en/v1.6-migration/schedule-workloads/_index.md +++ b/content/rancher/v2.x/en/v1.6-migration/schedule-workloads/_index.md @@ -87,7 +87,7 @@ Rancher schedules pods to the node you select if 1) there are compute resource a If you expose the workload using a NodePort that conflicts with another workload, the deployment gets created successfully, but no NodePort service is created. Therefore, the workload isn't exposed outside of the cluster. -After the workload is created, you can confirm that the pods are scheduled to your chosen node. From the project view, click **Resources > Workloads.** (In versions prior to v2.3.0, click the **Workloads** tab.) Click the **Group by Node** icon to sort your workloads by node. Note that both Nginx pods are scheduled to the same node. +After the workload is created, you can confirm that the pods are scheduled to your chosen node. From the project view, click **Resources > Workloads.** (In versions before v2.3.0, click the **Workloads** tab.) Click the **Group by Node** icon to sort your workloads by node. Note that both Nginx pods are scheduled to the same node. ![Pods Scheduled to Same Node]({{}}/img/rancher/scheduled-nodes.png) diff --git a/content/rke/latest/en/config-options/add-ons/_index.md b/content/rke/latest/en/config-options/add-ons/_index.md index 89695c786c3..a24a6b2d72a 100644 --- a/content/rke/latest/en/config-options/add-ons/_index.md +++ b/content/rke/latest/en/config-options/add-ons/_index.md @@ -16,7 +16,7 @@ There are a few things worth noting: * In addition to these pluggable add-ons, you can specify an add-on that you want deployed after the cluster deployment is complete. * As of v0.1.8, RKE will update an add-on if it is the same name. -* Prior to v0.1.8, update any add-ons by using `kubectl edit`. +* Before v0.1.8, update any add-ons by using `kubectl edit`. ## Critical and Non-Critical Add-ons diff --git a/content/rke/latest/en/config-options/add-ons/ingress-controllers/_index.md b/content/rke/latest/en/config-options/add-ons/ingress-controllers/_index.md index ad70ea165a4..62d29580145 100644 --- a/content/rke/latest/en/config-options/add-ons/ingress-controllers/_index.md +++ b/content/rke/latest/en/config-options/add-ons/ingress-controllers/_index.md @@ -6,7 +6,7 @@ weight: 262 By default, RKE deploys the NGINX ingress controller on all schedulable nodes. -> **Note:** As of v0.1.8, only workers are considered schedulable nodes, but prior to v0.1.8, worker and controlplane nodes were considered schedulable nodes. +> **Note:** As of v0.1.8, only workers are considered schedulable nodes, but before v0.1.8, worker and controlplane nodes were considered schedulable nodes. RKE will deploy the ingress controller as a DaemonSet with `hostnetwork: true`, so ports `80`, and `443` will be opened on each node where the controller is deployed. diff --git a/content/rke/latest/en/config-options/add-ons/user-defined-add-ons/_index.md b/content/rke/latest/en/config-options/add-ons/user-defined-add-ons/_index.md index 72808d38936..fb874b9b134 100644 --- a/content/rke/latest/en/config-options/add-ons/user-defined-add-ons/_index.md +++ b/content/rke/latest/en/config-options/add-ons/user-defined-add-ons/_index.md @@ -18,7 +18,7 @@ RKE only adds additional add-ons when using `rke up` multiple times. 
RKE does ** As of v0.1.8, RKE will update an add-on if it is the same name. -Prior to v0.1.8, update any add-ons by using `kubectl edit`. +Before v0.1.8, update any add-ons by using `kubectl edit`. ## In-line Add-ons diff --git a/content/rke/latest/en/config-options/cloud-providers/vsphere/enabling-uuid/_index.md b/content/rke/latest/en/config-options/cloud-providers/vsphere/enabling-uuid/_index.md index 15b8c0e8946..df0278d5088 100644 --- a/content/rke/latest/en/config-options/cloud-providers/vsphere/enabling-uuid/_index.md +++ b/content/rke/latest/en/config-options/cloud-providers/vsphere/enabling-uuid/_index.md @@ -32,4 +32,4 @@ $ govc vm.change -vm -e disk.enableUUID=TRUE In Rancher v2.0.4+, disk UUIDs are enabled in vSphere node templates by default. -If you are using Rancher prior to v2.0.4, refer to the [vSphere node template documentation.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4/#disk-uuids) for details on how to enable a UUID with a Rancher node template. +If you are using Rancher before v2.0.4, refer to the [vSphere node template documentation.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/before-2.0.4/#disk-uuids) for details on how to enable a UUID with a Rancher node template. diff --git a/content/rke/latest/en/config-options/nodes/_index.md b/content/rke/latest/en/config-options/nodes/_index.md index 68c315a332d..fad9ee1409b 100644 --- a/content/rke/latest/en/config-options/nodes/_index.md +++ b/content/rke/latest/en/config-options/nodes/_index.md @@ -78,7 +78,7 @@ nodes: You can specify the list of roles that you want the node to be as part of the Kubernetes cluster. Three roles are supported: `controlplane`, `etcd` and `worker`. Node roles are not mutually exclusive. It's possible to assign any combination of roles to any node. It's also possible to change a node's role using the upgrade process. -> **Note:** Prior to v0.1.8, workloads/pods might have run on any nodes with `worker` or `controlplane` roles, but as of v0.1.8, they will only be deployed to any `worker` nodes. +> **Note:** Before v0.1.8, workloads/pods might have run on any nodes with `worker` or `controlplane` roles, but as of v0.1.8, they will only be deployed to any `worker` nodes. ### etcd diff --git a/content/rke/latest/en/config-options/private-registries/_index.md b/content/rke/latest/en/config-options/private-registries/_index.md index 2f448920312..1fe91b8b182 100644 --- a/content/rke/latest/en/config-options/private-registries/_index.md +++ b/content/rke/latest/en/config-options/private-registries/_index.md @@ -35,5 +35,5 @@ By default, all system images are being pulled from DockerHub. If you are on a s As of v0.1.10, you have to configure your private registry credentials, but you can specify this registry as a default registry so that all [system images]({{}}/rke/latest/en/config-options/system-images/) are pulled from the designated private registry. You can use the command `rke config --system-images` to get the list of default system images to populate your private registry. -Prior to v0.1.10, you had to configure your private registry credentials **and** update the names of all the [system images]({{}}/rke/latest/en/config-options/system-images/) in the `cluster.yml` so that the image names would have the private registry URL appended before each image name. 
+Before v0.1.10, you had to configure your private registry credentials **and** update the names of all the [system images]({{}}/rke/latest/en/config-options/system-images/) in the `cluster.yml` so that the image names would have the private registry URL appended before each image name. diff --git a/content/rke/latest/en/config-options/services/services-extras/_index.md b/content/rke/latest/en/config-options/services/services-extras/_index.md index 57f623800ab..8c86d64de56 100644 --- a/content/rke/latest/en/config-options/services/services-extras/_index.md +++ b/content/rke/latest/en/config-options/services/services-extras/_index.md @@ -11,13 +11,13 @@ For any of the Kubernetes services, you can update the `extra_args` to change th As of `v0.1.3`, using `extra_args` will add new arguments and **override** any existing defaults. For example, if you need to modify the default admission plugins list, you need to include the default list and edit it with your changes so all changes are included. -Prior to `v0.1.3`, using `extra_args` would only add new arguments to the list and there was no ability to change the default list. +Before `v0.1.3`, using `extra_args` would only add new arguments to the list and there was no ability to change the default list. All service defaults and parameters are defined per [`kubernetes_version`]({{}}/rke/latest/en/config-options/#kubernetes-version): - For RKE v0.3.0+, the service defaults and parameters are defined per [`kubernetes_version`]({{}}/rke/latest/en/config-options/#kubernetes-version). The service defaults are located [here](https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_service_options.go). The default list of admissions plugins is the same for all Kubernetes versions and is located [here](https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_service_options.go#L11). -- For RKE prior to v0.3.0, the service defaults and admission plugins are defined per [`kubernetes_version`]({{}}/rke/latest/en/config-options/#kubernetes-version) and located [here](https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go). +- For RKE before v0.3.0, the service defaults and admission plugins are defined per [`kubernetes_version`]({{}}/rke/latest/en/config-options/#kubernetes-version) and located [here](https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go). 
```yaml services: diff --git a/content/rke/latest/en/config-options/system-images/_index.md b/content/rke/latest/en/config-options/system-images/_index.md index 041a99a186e..148168a5821 100644 --- a/content/rke/latest/en/config-options/system-images/_index.md +++ b/content/rke/latest/en/config-options/system-images/_index.md @@ -63,7 +63,7 @@ system_images: metrics_server: rancher/metrics-server-amd64:v0.3.1 ``` -Prior to `v0.1.6`, instead of using the `rancher/rke-tools` image, we used the following images: +Before `v0.1.6`, instead of using the `rancher/rke-tools` image, we used the following images: ```yaml system_images: diff --git a/content/rke/latest/en/etcd-snapshots/example-scenarios/_index.md b/content/rke/latest/en/etcd-snapshots/example-scenarios/_index.md index abf4f768cbc..3cae808ab71 100644 --- a/content/rke/latest/en/etcd-snapshots/example-scenarios/_index.md +++ b/content/rke/latest/en/etcd-snapshots/example-scenarios/_index.md @@ -100,18 +100,18 @@ nginx-65899c769f-qkhml 1/1 Running 0 17s ``` {{% /tab %}} -{{% tab "RKE prior to v0.2.0" %}} +{{% tab "RKE before v0.2.0" %}} This walkthrough will demonstrate how to restore an etcd cluster from a local snapshot with the following steps: -1. [Take a local snapshot of the cluster](#take-a-local-snapshot-of-the-cluster-rke-prior-to-v0.2.0) -1. [Store the snapshot externally](#store-the-snapshot-externally-rke-prior-to-v0.2.0) -1. [Simulate a node failure](#simulate-a-node-failure-rke-prior-to-v0.2.0) -1. [Remove the Kubernetes cluster and clean the nodes](#remove-the-kubernetes-cluster-and-clean-the-nodes-rke-prior-to-v0.2.0) -1. [Retrieve the backup and place it on a new node](#retrieve-the-backup-and-place-it-on-a-new-node-rke-prior-to-v0.2.0) -1. [Add a new etcd node to the Kubernetes cluster](#add-a-new-etcd-node-to-the-kubernetes-cluster-rke-prior-to-v0.2.0) -1. [Restore etcd on the new node from the backup](#restore-etcd-on-the-new-node-from-the-backup-rke-prior-to-v0.2.0) -1. [Restore Operations on the Cluster](#restore-operations-on-the-cluster-rke-prior-to-v0.2.0) +1. [Take a local snapshot of the cluster](#take-a-local-snapshot-of-the-cluster-rke-before-v0.2.0) +1. [Store the snapshot externally](#store-the-snapshot-externally-rke-before-v0.2.0) +1. [Simulate a node failure](#simulate-a-node-failure-rke-before-v0.2.0) +1. [Remove the Kubernetes cluster and clean the nodes](#remove-the-kubernetes-cluster-and-clean-the-nodes-rke-before-v0.2.0) +1. [Retrieve the backup and place it on a new node](#retrieve-the-backup-and-place-it-on-a-new-node-rke-before-v0.2.0) +1. [Add a new etcd node to the Kubernetes cluster](#add-a-new-etcd-node-to-the-kubernetes-cluster-rke-before-v0.2.0) +1. [Restore etcd on the new node from the backup](#restore-etcd-on-the-new-node-from-the-backup-rke-before-v0.2.0) +1. [Restore Operations on the Cluster](#restore-operations-on-the-cluster-rke-before-v0.2.0) ### Example Scenario of restoring from a Local Snapshot @@ -122,7 +122,7 @@ In this example, the Kubernetes cluster was deployed on two AWS nodes. | node1 | 10.0.0.1 | [controlplane, worker] | | node2 | 10.0.0.2 | [etcd] | - + ### 1. Take a Local Snapshot of the Cluster Back up the Kubernetes cluster by taking a local snapshot: @@ -131,7 +131,7 @@ Back up the Kubernetes cluster by taking a local snapshot: $ rke etcd snapshot-save --name snapshot.db --config cluster.yml ``` - + ### 2. Store the Snapshot Externally After taking the etcd snapshot on `node2`, we recommend saving this backup in a persistent place. 
One of the options is to save the backup and `pki.bundle.tar.gz` file on an S3 bucket or tape backup. @@ -145,7 +145,7 @@ root@node2:~# s3cmd \ s3://rke-etcd-backup/ ``` - + ### 3. Simulate a Node Failure To simulate the failure, let's power down `node2`. @@ -159,7 +159,7 @@ root@node2:~# poweroff | node1 | 10.0.0.1 | [controlplane, worker] | | ~~node2~~ | ~~10.0.0.2~~ | ~~[etcd]~~ | - + ### 4. Remove the Kubernetes Cluster and Clean the Nodes The following command removes your cluster and cleans the nodes so that the cluster can be restored without any conflicts: @@ -168,7 +168,7 @@ The following command removes your cluster and cleans the nodes so that the clus rke remove --config rancher-cluster.yml ``` - + ### 5. Retrieve the Backup and Place it On a New Node Before restoring etcd and running `rke up`, we need to retrieve the backup saved on S3 to a new node, e.g. `node3`. @@ -190,7 +190,7 @@ root@node3:~# s3cmd get \ > **Note:** If you had multiple etcd nodes, you would have to manually sync the snapshot and `pki.bundle.tar.gz` across all of the etcd nodes in the cluster. - + ### 6. Add a New etcd Node to the Kubernetes Cluster Before updating and restoring etcd, you will need to add the new node into the Kubernetes cluster with the `etcd` role. In the `cluster.yml`, comment out the old node and add in the new node. ` @@ -215,7 +215,7 @@ nodes: - etcd ``` - + ### 7. Restore etcd on the New Node from the Backup After the new node is added to the `cluster.yml`, run the `rke etcd snapshot-restore` command to launch `etcd` from the backup: @@ -226,7 +226,7 @@ $ rke etcd snapshot-restore --name snapshot.db --config cluster.yml The snapshot and `pki.bundle.tar.gz` file are expected to be saved at `/opt/rke/etcd-snapshots` on each etcd node. - + ### 8. Restore Operations on the Cluster Finally, we need to restore the operations on the cluster. We will make the Kubernetes API point to the new `etcd` by running `rke up` again using the new `cluster.yml`. diff --git a/content/rke/latest/en/etcd-snapshots/one-time-snapshots/_index.md b/content/rke/latest/en/etcd-snapshots/one-time-snapshots/_index.md index f1cfefeec90..ea37b69f44a 100644 --- a/content/rke/latest/en/etcd-snapshots/one-time-snapshots/_index.md +++ b/content/rke/latest/en/etcd-snapshots/one-time-snapshots/_index.md @@ -94,7 +94,7 @@ Below is an [example IAM policy](https://docs.aws.amazon.com/IAM/latest/UserGuid For details on giving an application access to S3, refer to the AWS documentation on [Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances.](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html) {{% /tab %}} -{{% tab "RKE prior to v0.2.0" %}} +{{% tab "RKE before v0.2.0" %}} To save a snapshot of etcd from each etcd node in the cluster config file, run the `rke etcd snapshot-save` command. diff --git a/content/rke/latest/en/etcd-snapshots/recurring-snapshots/_index.md b/content/rke/latest/en/etcd-snapshots/recurring-snapshots/_index.md index 145f3466510..4e4cd3fdee6 100644 --- a/content/rke/latest/en/etcd-snapshots/recurring-snapshots/_index.md +++ b/content/rke/latest/en/etcd-snapshots/recurring-snapshots/_index.md @@ -30,8 +30,8 @@ time="2018-05-04T18:43:16Z" level=info msg="Created backup" name="2018-05-04T18: |Option|Description| S3 Specific | |---|---| --- | -|**interval_hours**| The duration in hours between recurring backups. 
This supercedes the `creation` option (which was used in RKE prior to v0.2.0) and will override it if both are specified.| |
-|**retention**| The number of snapshots to retain before rotation. If the retention is configured in both `etcd.retention` (time period to keep snapshots in hours), which was required in RKE prior to v0.2.0, and at `etcd.backup_config.retention` (number of snapshots), the latter will be used. | |
+|**interval_hours**| The duration in hours between recurring backups. This supersedes the `creation` option (which was used in RKE before v0.2.0) and will override it if both are specified.| |
+|**retention**| The number of snapshots to retain before rotation. If the retention is configured in both `etcd.retention` (time period to keep snapshots in hours), which was required in RKE before v0.2.0, and at `etcd.backup_config.retention` (number of snapshots), the latter will be used. | |
|**bucket_name**| S3 bucket name where backups will be stored| * |
|**folder**| Folder inside S3 bucket where backups will be stored. This is optional. _Available as of v0.3.0_ | * |
|**access_key**| S3 access key with permission to access the backup bucket.| * |
@@ -96,11 +96,11 @@ services:
 ```

{{% /tab %}}
-{{% tab "RKE prior to v0.2.0"%}}
+{{% tab "RKE before v0.2.0"%}}

To schedule automatic recurring etcd snapshots, you can enable the `etcd-snapshot` service with [extra configuration options](#options-for-the-local-etcd-snapshot-service). `etcd-snapshot` runs in a service container alongside the `etcd` container. By default, the `etcd-snapshot` service takes a snapshot for every node that has the `etcd` role and stores them to local disk in `/opt/rke/etcd-snapshots`.

-RKE saves a backup of the certificates, i.e. a file named `pki.bundle.tar.gz`, in the same location. The snapshot and pki bundle file are required for the restore process in versions prior to v0.2.0.
+RKE saves a backup of the certificates, i.e. a file named `pki.bundle.tar.gz`, in the same location. The snapshot and pki bundle file are required for the restore process in versions before v0.2.0.

### Snapshot Service Logging

diff --git a/content/rke/latest/en/etcd-snapshots/restoring-from-backup/_index.md b/content/rke/latest/en/etcd-snapshots/restoring-from-backup/_index.md
index 7291c3605bd..50f22e8692e 100644
--- a/content/rke/latest/en/etcd-snapshots/restoring-from-backup/_index.md
+++ b/content/rke/latest/en/etcd-snapshots/restoring-from-backup/_index.md
@@ -74,7 +74,7 @@ $ rke etcd snapshot-restore \

| `--ignore-docker-version` | [Disable Docker version check]({{}}/rke/latest/en/config-options/#supported-docker-versions) |

{{% /tab %}}
-{{% tab "RKE prior to v0.2.0"%}}
+{{% tab "RKE before v0.2.0"%}}

If there is a disaster with your Kubernetes cluster, you can use `rke etcd snapshot-restore` to recover your etcd. This command reverts etcd to a specific snapshot and should be run on an etcd node of the the specific cluster that has suffered the disaster.

diff --git a/content/rke/latest/en/installation/_index.md b/content/rke/latest/en/installation/_index.md
index 44adb3e9509..b96e58af4b7 100644
--- a/content/rke/latest/en/installation/_index.md
+++ b/content/rke/latest/en/installation/_index.md
@@ -178,7 +178,7 @@ The Kubernetes cluster state, which consists of the cluster configuration file `

As of v0.2.0, RKE creates a `.rkestate` file in the same directory that has the cluster configuration file `cluster.yml`.
The `.rkestate` file contains the current state of the cluster including the RKE configuration and the certificates. It is required to keep this file in order to update the cluster or perform any operation on it through RKE. -Prior to v0.2.0, RKE saved the Kubernetes cluster state as a secret. When updating the state, RKE pulls the secret, updates/changes the state and saves a new secret. +Before v0.2.0, RKE saved the Kubernetes cluster state as a secret. When updating the state, RKE pulls the secret, updates/changes the state and saves a new secret. ## Interacting with your Kubernetes cluster diff --git a/content/rke/latest/en/upgrades/_index.md b/content/rke/latest/en/upgrades/_index.md index 5991e82ce6f..aae7e17b36f 100644 --- a/content/rke/latest/en/upgrades/_index.md +++ b/content/rke/latest/en/upgrades/_index.md @@ -46,7 +46,7 @@ This file is created in the same directory that has the cluster configuration fi It is required to keep the `cluster.rkestate` file to perform any operation on the cluster through RKE, or when upgrading a cluster last managed via RKE v0.2.0 or later. {{% /tab %}} -{{% tab "RKE prior to v0.2.0" %}} +{{% tab "RKE before v0.2.0" %}} Ensure that the `kube_config_cluster.yml` file is present in the working directory. RKE saves the Kubernetes cluster state as a secret. When updating the state, RKE pulls the secret, updates or changes the state, and saves a new secret. The `kube_config_cluster.yml` file is required for upgrading a cluster last managed via RKE v0.1.x. @@ -103,7 +103,7 @@ In addition, if neither `kubernetes_version` nor `system_images` are configured As of v0.2.0, if a version is defined in `kubernetes_version` and is not found in the specific list of supported Kubernetes versions, then RKE will error out. -Prior to v0.2.0, if a version is defined in `kubernetes_version` and is not found in the specific list of supported Kubernetes versions, the default version from the supported list is used. +Before v0.2.0, if a version is defined in `kubernetes_version` and is not found in the specific list of supported Kubernetes versions, the default version from the supported list is used. If you want to use a different version from the supported list, please use the [system images]({{}}/rke/latest/en/config-options/system-images/) option. @@ -113,7 +113,7 @@ In RKE, `kubernetes_version` is used to map the version of Kubernetes to the def For RKE v0.3.0+, the service defaults are located [here](https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_service_options.go). -For RKE prior to v0.3.0, the service defaults are located [here](https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go). Note: The version in the path of the service defaults file corresponds to a Rancher version. Therefore, for Rancher v2.1.x, [this file](https://github.com/rancher/types/blob/release/v2.1/apis/management.cattle.io/v3/k8s_defaults.go) should be used. +For RKE before v0.3.0, the service defaults are located [here](https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go). Note: The version in the path of the service defaults file corresponds to a Rancher version. Therefore, for Rancher v2.1.x, [this file](https://github.com/rancher/types/blob/release/v2.1/apis/management.cattle.io/v3/k8s_defaults.go) should be used. 
### Service Upgrades diff --git a/content/rke/latest/en/upgrades/how-upgrades-work/_index.md b/content/rke/latest/en/upgrades/how-upgrades-work/_index.md index 77ef6cfec1f..c7eb6fa7390 100644 --- a/content/rke/latest/en/upgrades/how-upgrades-work/_index.md +++ b/content/rke/latest/en/upgrades/how-upgrades-work/_index.md @@ -65,7 +65,7 @@ For more information on configuring the number of replicas for each addon, refer For an example showing how to configure the addons, refer to the [example cluster.yml.]({{}}/rke/latest/en/upgrades/configuring-strategy/#example-cluster-yml) {{% /tab %}} -{{% tab "RKE prior to v1.1.0" %}} +{{% tab "RKE before v1.1.0" %}} When a cluster is upgraded with `rke up`, using the default options, the following process is used: From 225c4baca85e17da29cf652762a81bd289f63762 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Fri, 19 Feb 2021 10:53:15 -0700 Subject: [PATCH 27/36] Update NO_PROXY variable to prevent error creating user #2725 --- .../behind-proxy/install-rancher/_index.md | 4 ++-- .../behind-proxy/launch-kubernetes/_index.md | 4 ++-- .../single-node-docker/proxy/_index.md | 3 ++- 3 files changed, 6 insertions(+), 5 deletions(-) diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/behind-proxy/install-rancher/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/behind-proxy/install-rancher/_index.md index 24ae1209d8f..cfec9eaa3ec 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/behind-proxy/install-rancher/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/behind-proxy/install-rancher/_index.md @@ -34,7 +34,7 @@ helm upgrade --install cert-manager jetstack/cert-manager \ --namespace cert-manager --version v0.15.2 \ --set http_proxy=http://${proxy_host} \ --set https_proxy=http://${proxy_host} \ - --set no_proxy=127.0.0.0/8\\,10.0.0.0/8\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local + --set no_proxy=127.0.0.0/8\\,10.0.0.0/8\\,cattle-system.svc\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local ``` Now you should wait until cert-manager is finished starting up: @@ -65,7 +65,7 @@ helm upgrade --install rancher rancher-latest/rancher \ --namespace cattle-system \ --set hostname=rancher.example.com \ --set proxy=http://${proxy_host} - --set no_proxy=127.0.0.0/8\\,10.0.0.0/8\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local + --set no_proxy=127.0.0.0/8\\,10.0.0.0/8\\,cattle-system.svc\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local ``` After waiting for the deployment to finish: diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/behind-proxy/launch-kubernetes/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/behind-proxy/launch-kubernetes/_index.md index 5c73de9021f..0393ea433b9 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/behind-proxy/launch-kubernetes/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/behind-proxy/launch-kubernetes/_index.md @@ -15,7 +15,7 @@ For convenience export the IP address and port of your proxy into an environment export proxy_host="10.0.0.5:8888" export HTTP_PROXY=http://${proxy_host} export HTTPS_PROXY=http://${proxy_host} -export NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16 +export NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,cattle-system.svc ``` Next configure apt to use this proxy when installing packages. 
If you are not using Ubuntu, you have to adapt this step accordingly: @@ -47,7 +47,7 @@ cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf > /d [Service] Environment="HTTP_PROXY=http://${proxy_host}" Environment="HTTPS_PROXY=http://${proxy_host}" -Environment="NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16" +Environment="NO_PROXY=127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16" EOF ``` diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/proxy/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/proxy/_index.md index f91bb0daede..d4d60519ec3 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/proxy/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/proxy/_index.md @@ -26,6 +26,7 @@ Passing environment variables to the Rancher container can be done using `-e KEY - `127.0.0.1` - `0.0.0.0` - `10.0.0.0/8` +- `cattle-system.svc` - `.svc` - `.cluster.local` @@ -36,7 +37,7 @@ docker run -d --restart=unless-stopped \ -p 80:80 -p 443:443 \ -e HTTP_PROXY="http://192.168.10.1:3128" \ -e HTTPS_PROXY="http://192.168.10.1:3128" \ - -e NO_PROXY="localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.10.0/24,.svc,.cluster.local,example.com" \ + -e NO_PROXY="localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,cattle-system.svc,192.168.10.0/24,.svc,.cluster.local,example.com" \ --privileged \ rancher/rancher:latest ``` From c65b3fe62fc82a494e7b5b1b360425c491ea9715 Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Mon, 22 Feb 2021 09:37:18 -0800 Subject: [PATCH 28/36] Links broken. Revert change to links --- .../vsphere/provisioning-vsphere-clusters/_index.md | 6 +++--- .../vsphere/vsphere-node-template-config/_index.md | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md index 079ef5e5e15..97642a2fe23 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md @@ -106,7 +106,7 @@ You can access your cluster after its state is updated to **Active.** Use Rancher to create a Kubernetes cluster in vSphere. -For Rancher versions before v2.0.4, when you create the cluster, you will also need to follow the steps in [this section](http://localhost:9001/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vpshere-node-template-config/before-2.0.4/#disk-uuids) to enable disk UUIDs. +For Rancher versions before v2.0.4, when you create the cluster, you will also need to follow the steps in [this section](http://localhost:9001/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vpshere-node-template-config/prior-to-2.0.4/#disk-uuids) to enable disk UUIDs. 1. From the **Clusters** page, click **Add Cluster**. 1. Choose **vSphere**. @@ -116,7 +116,7 @@ For Rancher versions before v2.0.4, when you create the cluster, you will also n 1. If you want to dynamically provision persistent storage or other infrastructure later, you will need to enable the vSphere cloud provider by modifying the cluster YAML file. 
For details, refer to [this section.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere) 1. Add one or more [node pools]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-pools) to your cluster. Each node pool uses a node template to provision new nodes. To create a node template, click **Add Node Template** and complete the **vSphere Options** form. For help filling out the form, refer to the vSphere node template configuration reference. Refer to the newest version of the configuration reference that is less than or equal to your Rancher version: - [v2.0.4]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.0.4) - - [before v2.0.4]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/before-2.0.4) + - [before v2.0.4]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4) 1. Review your options to confirm they're correct. Then click **Create** to start provisioning the VMs and Kubernetes services. **Result:** @@ -142,4 +142,4 @@ After creating your cluster, you can access it through the Rancher UI. As a best - **Access your cluster with the kubectl CLI:** Follow [these steps]({{}}/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/#accessing-clusters-with-kubectl-on-your-workstation) to access clusters with kubectl on your workstation. In this case, you will be authenticated through the Rancher server’s authentication proxy, then Rancher will connect you to the downstream cluster. This method lets you manage the cluster without the Rancher UI. - **Access your cluster with the kubectl CLI, using the authorized cluster endpoint:** Follow [these steps]({{}}/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/#authenticating-directly-with-a-downstream-cluster) to access your cluster with kubectl directly, without authenticating through Rancher. We recommend setting up this alternative method to access your cluster so that in case you can’t connect to Rancher, you can still access the cluster. 
-- **Provision Storage:** For an example of how to provision storage in vSphere using Rancher, refer to [this section.]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere) In order to dynamically provision storage in vSphere, the vSphere provider must be [enabled.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere) \ No newline at end of file +- **Provision Storage:** For an example of how to provision storage in vSphere using Rancher, refer to [this section.]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere) In order to dynamically provision storage in vSphere, the vSphere provider must be [enabled.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere) diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/_index.md index b3d79629b4d..d660f823fca 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/_index.md @@ -13,4 +13,4 @@ The vSphere node templates in Rancher were updated in the following Rancher vers - [v2.2.0](./v2.2.0) - [v2.0.4](./v2.0.4) -For Rancher versions before v2.0.4, refer to [this version.](./before-2.0.4) \ No newline at end of file +For Rancher versions before v2.0.4, refer to [this version.](./prior-to-2.0.4) From 871c3a117eb04c7a95ddb44980c3cc2c8ed5d449 Mon Sep 17 00:00:00 2001 From: Adrian Goins Date: Mon, 22 Feb 2021 14:40:05 -0300 Subject: [PATCH 29/36] Update _index.md Remove 'Experimental' tag for embedded etcd. --- content/k3s/latest/en/installation/datastore/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/k3s/latest/en/installation/datastore/_index.md b/content/k3s/latest/en/installation/datastore/_index.md index ca047436b97..f7baacab835 100644 --- a/content/k3s/latest/en/installation/datastore/_index.md +++ b/content/k3s/latest/en/installation/datastore/_index.md @@ -94,6 +94,6 @@ K3S_DATASTORE_KEYFILE='/path/to/client.key' \ k3s server ``` -### Embedded etcd for HA (Experimental) +### Embedded etcd for HA -Please see [High Availability with Embedded DB (Experimental)]({{}}/k3s/latest/en/installation/ha-embedded/) for instructions on how to run with this option. +Please see [High Availability with Embedded DB]({{}}/k3s/latest/en/installation/ha-embedded/) for instructions on how to run with this option. From c18823e2c1c8436e1b2757f12d274826e066f262 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Tue, 23 Feb 2021 20:04:23 -0700 Subject: [PATCH 30/36] Fix note formatting --- .../v2.x/en/installation/install-rancher-on-k8s/_index.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md index 9d3176cfee2..4b67a9b4c07 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md @@ -16,8 +16,7 @@ Set up the Rancher server's local Kubernetes cluster. The cluster requirements depend on the Rancher version: -- **As of Rancher v2.5,** Rancher can be installed on any Kubernetes cluster. 
This cluster can use upstream Kubernetes, or it can use one of Rancher's Kubernetes distributions, or it can be a managed Kubernetes cluster from a provider such as Amazon EKS.
-> **Note:** To deploy Rancher v2.5 on a hosted Kubernetes cluster such as EKS, GKE, or AKS, you should deploy a compatible Ingress controller first to configure [SSL termination on Rancher.]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/#4-choose-your-ssl-configuration).
+- **As of Rancher v2.5,** Rancher can be installed on any Kubernetes cluster. This cluster can use upstream Kubernetes, or it can use one of Rancher's Kubernetes distributions, or it can be a managed Kubernetes cluster from a provider such as Amazon EKS. Note: To deploy Rancher v2.5 on a hosted Kubernetes cluster such as EKS, GKE, or AKS, you should deploy a compatible Ingress controller first to configure [SSL termination on Rancher.]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/#4-choose-your-ssl-configuration).
 - **In Rancher v2.4.x,** Rancher needs to be installed on a K3s Kubernetes cluster or an RKE Kubernetes cluster.
 - **In Rancher before v2.4,** Rancher needs to be installed on an RKE Kubernetes cluster.

From f11d1b72d4190c85120866925bf2fb6ce2d73ebc Mon Sep 17 00:00:00 2001
From: Cameron Seader
Date: Wed, 24 Feb 2021 15:26:30 -0700
Subject: [PATCH 31/36] Added SUSE/openSUSE Linux support

---
 content/rke/latest/en/os/_index.md | 72 ++++++++++++++++++++++++++++++
 1 file changed, 72 insertions(+)

diff --git a/content/rke/latest/en/os/_index.md b/content/rke/latest/en/os/_index.md
index 1c44104e1fe..c9618f3430c 100644
--- a/content/rke/latest/en/os/_index.md
+++ b/content/rke/latest/en/os/_index.md
@@ -98,6 +98,78 @@ xt_tcpudp |
 ```
 net.bridge.bridge-nf-call-iptables=1
 ```
+### SUSE Linux Enterprise Server (SLES) / openSUSE
+If you are using SUSE Linux Enterprise Server or openSUSE follow the instructions below.
+
+#### Using upstream Docker
+If you are using upstream Docker, the package name is `docker-ce` or `docker-ee`. You can check the installed package by executing:
+
+```
+rpm -q docker-ce
+```
+
+When using the upstream Docker packages, please follow [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user).
+
+#### Using SUSE/openSUSE packaged Docker
+If you are using the Docker package supplied by SUSE/openSUSE, the package name is `docker`. You can check the installed package by executing:
+
+```
+rpm -q docker
+```
+
+#### Adding the Software repository for docker
+In SUSE Linux Enterprise Server 15 SP2, Docker is found in the Containers module.
+This module needs to be added before installing Docker.
+
+To list the available extensions and their activation commands, run SUSEConnect:
+```
+node:~ # SUSEConnect --list-extensions
+AVAILABLE EXTENSIONS AND MODULES
+
+    Basesystem Module 15 SP2 x86_64 (Activated)
+    Deactivate with: SUSEConnect -d -p sle-module-basesystem/15.2/x86_64
+
+    Containers Module 15 SP2 x86_64
+    Activate with: SUSEConnect -p sle-module-containers/15.2/x86_64
+```
+Run this SUSEConnect command to activate the Containers module.
+```
+node:~ # SUSEConnect -p sle-module-containers/15.2/x86_64
+Registering system to registration proxy https://rmt.seader.us

+Updating system details on https://rmt.seader.us ...

+Activating sle-module-containers 15.2 x86_64 ...
+-> Adding service to system ...
+-> Installing release package ...
+
+Successfully registered system
+```
+To run Docker CLI commands as your user, you need to add this user to the `docker` group.
+It is preferred not to use the root user for this.
+
+```
+usermod -aG docker <user_name>
+```
+
+To verify that the user is correctly configured, log out of the node, log in using SSH or your preferred method, and execute `docker ps`:
+
+```
+ssh user@node
+user@node:~> docker ps
+CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
+user@node:~>
+```
+### openSUSE MicroOS/Kubic (Atomic)
+Consult the project pages for openSUSE MicroOS and Kubic for installation instructions.
+#### openSUSE MicroOS
+Designed to host container workloads with automated administration and patching. By installing openSUSE MicroOS, you get a quick, small environment for deploying containers, or any other workload that benefits from transactional updates. As a rolling-release distribution, the software is always up to date.
+https://microos.opensuse.org
+#### openSUSE Kubic
+Based on MicroOS, Kubic is not a rolling-release distribution. It is designed with the same goals in mind and is also a Certified Kubernetes Distribution.
+https://kubic.opensuse.org
+Installation instructions:
+https://kubic.opensuse.org/blog/2021-02-08-MicroOS-Kubic-Rancher-RKE/
 
 ### Red Hat Enterprise Linux (RHEL) / Oracle Linux (OL) / CentOS
 
From fb2bb0c735336cdc2f8877effdb076f1b405a84f Mon Sep 17 00:00:00 2001
From: Cameron Seader
Date: Wed, 24 Feb 2021 15:58:43 -0700
Subject: [PATCH 32/36] Updated TOC

---
 content/rke/latest/en/os/_index.md | 42 ++++++++++++++++++------------
 1 file changed, 26 insertions(+), 16 deletions(-)

diff --git a/content/rke/latest/en/os/_index.md b/content/rke/latest/en/os/_index.md
index c9618f3430c..382ce44db12 100644
--- a/content/rke/latest/en/os/_index.md
+++ b/content/rke/latest/en/os/_index.md
@@ -5,23 +5,31 @@ weight: 5
 
 **In this section:**
 
-  - [Operating System](#operating-system)
-  - [General Linux Requirements](#general-linux-requirements)
-  - [Red Hat Enterprise Linux (RHEL) / Oracle Linux (OL) / CentOS](#red-hat-enterprise-linux-rhel-oracle-enterprise-linux-ol-centos)
-
-  - [Using upstream Docker](#using-upstream-docker)
-  - [Using RHEL/CentOS packaged Docker](#using-rhel-centos-packaged-docker)
-  - [Notes about Atomic Nodes](#red-hat-atomic)
-
-  - [OpenSSH version](#openssh-version)
-  - [Creating a Docker Group](#creating-a-docker-group)
-  - [Flatcar Container Linux](#flatcar-container-linux)
+- [Operating System](#operating-system)
+  - [General Linux Requirements](#general-linux-requirements)
+  - [SUSE Linux Enterprise Server (SLES) / openSUSE](#suse-linux-enterprise-server-sles--opensuse)
+    - [Using upstream Docker](#using-upstream-docker)
+    - [Using SUSE/openSUSE packaged docker](#using-suseopensuse-packaged-docker)
+    - [Adding the Software repository for docker](#adding-the-software-repository-for-docker)
+  - [openSUSE MicroOS/Kubic (Atomic)](#opensuse-microoskubic-atomic)
+    - [openSUSE MicroOS](#opensuse-microos)
+    - [openSUSE Kubic](#opensuse-kubic)
+  - [Red Hat Enterprise Linux (RHEL) / Oracle Linux (OL) / CentOS](#red-hat-enterprise-linux-rhel--oracle-linux-ol--centos)
+    - [Using upstream Docker](#using-upstream-docker-1)
+    - [Using RHEL/CentOS packaged Docker](#using-rhelcentos-packaged-docker)
+    - [Red Hat Atomic](#red-hat-atomic)
+    - [OpenSSH version](#openssh-version)
+    - [Creating a Docker Group](#creating-a-docker-group)
+  - [Flatcar Container Linux](#flatcar-container-linux)
 - [Software](#software)
+  - [OpenSSH](#openssh)
+  - [Kubernetes](#kubernetes)
+  - [Docker](#docker)
+    - [Installing Docker](#installing-docker)
+    - 
[Checking the Installed Docker Version](#checking-the-installed-docker-version) - [Ports](#ports) - - - [Opening port TCP/6443 using `iptables`](#opening-port-tcp-6443-using-iptables) - - [Opening port TCP/6443 using `firewalld`](#opening-port-tcp-6443-using-firewalld) + - [Opening port TCP/6443 using `iptables`](#opening-port-tcp6443-using-iptables) + - [Opening port TCP/6443 using `firewalld`](#opening-port-tcp6443-using-firewalld) - [SSH Server Configuration](#ssh-server-configuration) @@ -98,10 +106,12 @@ xt_tcpudp | ``` net.bridge.bridge-nf-call-iptables=1 ``` + ### SUSE Linux Enterprise Server (SLES) / openSUSE + If you are using SUSE Linux Enterprise Server or openSUSE follow the instructions below. -#### Using upstream Docker +#### Using upstream Docker If you are using upstream Docker, the package name is `docker-ce` or `docker-ee`. You can check the installed package by executing: ``` @@ -110,7 +120,7 @@ rpm -q docker-ce When using the upstream Docker packages, please follow [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user). -#### Using SUSE/openSUSE packaged Docker +#### Using SUSE/openSUSE packaged docker If you are using the Docker package supplied by SUSE/openSUSE, the package name is `docker`. You can check the installed package by executing: ``` From 17ead3ff7fa6946e8a43ca13f6a75dc447f086d6 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Wed, 24 Feb 2021 16:20:42 -0700 Subject: [PATCH 33/36] Capitalize Docker --- content/rke/latest/en/os/_index.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/rke/latest/en/os/_index.md b/content/rke/latest/en/os/_index.md index 382ce44db12..da210b14422 100644 --- a/content/rke/latest/en/os/_index.md +++ b/content/rke/latest/en/os/_index.md @@ -8,9 +8,9 @@ weight: 5 - [Operating System](#operating-system) - [General Linux Requirements](#general-linux-requirements) - [SUSE Linux Enterprise Server (SLES) / openSUSE](#suse-linux-enterprise-server-sles--opensuse) - - [Using upstream Docker](#using-upstream-docker) - - [Using SUSE/openSUSE packaged docker](#using-suseopensuse-packaged-docker) - - [Adding the Software repository for docker](#adding-the-software-repository-for-docker) + - [Using Upstream Docker](#using-upstream-docker) + - [Using SUSE/openSUSE packaged Docker](#using-suseopensuse-packaged-docker) + - [Adding the Software Repository for Docker](#adding-the-software-repository-for-docker) - [openSUSE MicroOS/Kubic (Atomic)](#opensuse-microoskubic-atomic) - [openSUSE MicroOS](#opensuse-microos) - [openSUSE Kubic](#opensuse-kubic) From 738a7a615720cceead1b42aefc5fb87e3e6d84d3 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Fri, 26 Feb 2021 10:22:09 -0700 Subject: [PATCH 34/36] Remove 'experimental' from K3s embedded etcd docs --- content/k3s/latest/en/installation/datastore/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/k3s/latest/en/installation/datastore/_index.md b/content/k3s/latest/en/installation/datastore/_index.md index f7baacab835..059d73e16fe 100644 --- a/content/k3s/latest/en/installation/datastore/_index.md +++ b/content/k3s/latest/en/installation/datastore/_index.md @@ -7,7 +7,7 @@ The ability to run Kubernetes using a datastore other than etcd sets K3s apart f * If your team doesn't have expertise in operating etcd, you can choose an enterprise-grade SQL database like MySQL or PostgreSQL * If you need to run a simple, short-lived cluster in your CI/CD environment, 
you can use the embedded SQLite database -* If you wish to deploy Kubernetes on the edge and require a highly available solution but can't afford the operational overhead of managing a database at the edge, you can use K3s's embedded HA datastore built on top of embedded etcd (currently experimental) +* If you wish to deploy Kubernetes on the edge and require a highly available solution but can't afford the operational overhead of managing a database at the edge, you can use K3s's embedded HA datastore built on top of embedded etcd. K3s supports the following datastore options: @@ -16,7 +16,7 @@ K3s supports the following datastore options: * [MySQL](https://www.mysql.com/) (certified against version 5.7) * [MariaDB](https://mariadb.org/) (certified against version 10.3.20) * [etcd](https://etcd.io/) (certified against version 3.3.15) -* Embedded etcd for High Availability (experimental) +* Embedded etcd for High Availability ### External Datastore Configuration Parameters If you wish to use an external datastore such as PostgreSQL, MySQL, or etcd you must set the `datastore-endpoint` parameter so that K3s knows how to connect to it. You may also specify parameters to configure the authentication and encryption of the connection. The below table summarizes these parameters, which can be passed as either CLI flags or environment variables. From e336af74564999aba2c3e684084fb066869a565c Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Fri, 26 Feb 2021 10:25:52 -0700 Subject: [PATCH 35/36] Say K3s embedded etcd datastore is not experimental --- content/k3s/latest/en/installation/_index.md | 2 +- content/k3s/latest/en/installation/airgap/_index.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/content/k3s/latest/en/installation/_index.md b/content/k3s/latest/en/installation/_index.md index b141bcce42b..91997c7a2a9 100644 --- a/content/k3s/latest/en/installation/_index.md +++ b/content/k3s/latest/en/installation/_index.md @@ -9,7 +9,7 @@ This section contains instructions for installing K3s in various environments. P [High Availability with an External DB]({{}}/k3s/latest/en/installation/ha/) details how to set up an HA K3s cluster backed by an external datastore such as MySQL, PostgreSQL, or etcd. -[High Availability with Embedded DB (Experimental)]({{}}/k3s/latest/en/installation/ha-embedded/) details how to set up an HA K3s cluster that leverages a built-in distributed database. +[High Availability with Embedded DB]({{}}/k3s/latest/en/installation/ha-embedded/) details how to set up an HA K3s cluster that leverages a built-in distributed database. [Air-Gap Installation]({{}}/k3s/latest/en/installation/airgap/) details how to set up K3s in environments that do not have direct access to the Internet. diff --git a/content/k3s/latest/en/installation/airgap/_index.md b/content/k3s/latest/en/installation/airgap/_index.md index 93aed208fab..91d37c00830 100644 --- a/content/k3s/latest/en/installation/airgap/_index.md +++ b/content/k3s/latest/en/installation/airgap/_index.md @@ -69,7 +69,7 @@ INSTALL_K3S_SKIP_DOWNLOAD=true K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetok {{% /tab %}} {{% tab "High Availability Configuration" %}} -Reference the [High Availability with an External DB]({{< baseurl >}}/k3s/latest/en/installation/ha) or [High Availability with Embedded DB (Experimental)]({{< baseurl >}}/k3s/latest/en/installation/ha-embedded) guides. 
You will be tweaking install commands so you specify `INSTALL_K3S_SKIP_DOWNLOAD=true` and run your install script locally instead of via curl. You will also utilize `INSTALL_K3S_EXEC='args'` to supply any arguments to k3s. +Reference the [High Availability with an External DB]({{< baseurl >}}/k3s/latest/en/installation/ha) or [High Availability with Embedded DB]({{< baseurl >}}/k3s/latest/en/installation/ha-embedded) guides. You will be tweaking install commands so you specify `INSTALL_K3S_SKIP_DOWNLOAD=true` and run your install script locally instead of via curl. You will also utilize `INSTALL_K3S_EXEC='args'` to supply any arguments to k3s. For example, step two of the High Availability with an External DB guide mentions the following: From c0e67713aec743cc019dac28586d681933b97e65 Mon Sep 17 00:00:00 2001 From: Brian Downs Date: Fri, 26 Feb 2021 13:22:35 -0700 Subject: [PATCH 36/36] Add K3s Hardening Guide and Self Assessment (#2882) * add k3s hardening guide and self assessment Signed-off-by: Brian Downs --- content/k3s/latest/en/security/_index.md | 9 + .../en/security/hardening_guide/_index.md | 544 ++++ .../en/security/self_assessment/_index.md | 2497 +++++++++++++++++ 3 files changed, 3050 insertions(+) create mode 100644 content/k3s/latest/en/security/_index.md create mode 100644 content/k3s/latest/en/security/hardening_guide/_index.md create mode 100644 content/k3s/latest/en/security/self_assessment/_index.md diff --git a/content/k3s/latest/en/security/_index.md b/content/k3s/latest/en/security/_index.md new file mode 100644 index 00000000000..7468e909504 --- /dev/null +++ b/content/k3s/latest/en/security/_index.md @@ -0,0 +1,9 @@ +--- +title: "Security" +weight: 90 +--- + +This section describes the methodology and means of securing a K3s cluster. It's broken into 2 sections. + +* [Hardening Guide](./hardening_guide/) +* [CIS Benchmark Self-Assessment Guide](./self_assessment/) diff --git a/content/k3s/latest/en/security/hardening_guide/_index.md b/content/k3s/latest/en/security/hardening_guide/_index.md new file mode 100644 index 00000000000..aa140f48ee3 --- /dev/null +++ b/content/k3s/latest/en/security/hardening_guide/_index.md @@ -0,0 +1,544 @@ +--- +title: "CIS Hardening Guide" +weight: 80 +--- + +This document provides prescriptive guidance for hardening a production installation of K3s. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Information Security (CIS). + +K3s has a number of security mitigations applied and turned on by default and will pass a number of the Kubernetes CIS controls without modification. There are some notable exceptions to this that require manual intervention to fully comply with the CIS Benchmark: + +1. K3s will not modify the host operating system. Any host-level modifications will need to be done manually. +2. Certain CIS policy controls for PodSecurityPolicies and NetworkPolicies will restrict the functionality of this cluster. You must opt into having K3s configure these by adding the appropriate options (enabling of admission plugins) to your command-line flags or configuration file as well as manually applying appropriate policies. Further detail in the sections below. + +The first section (1.1) of the CIS Benchmark concerns itself primarily with pod manifest permissions and ownership. K3s doesn't utilize these for the core components since everything is packaged into a single binary. 
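+
+As a quick orientation, the opt-in options discussed throughout this guide are passed to the K3s server at startup. A condensed sketch (the full set of remediations appears in the consolidated example at the end of this document):
+
+```bash
+k3s server \
+  --protect-kernel-defaults=true \
+  --kube-apiserver-arg='enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount'
+```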
+
+## Host-level Requirements
+
+There are two areas of host-level requirements: kernel parameters and etcd process/directory configuration. These are outlined in this section.
+
+### Ensure `protect-kernel-defaults` is set
+
+This is a kubelet flag that will cause the kubelet to exit if the required kernel parameters are unset or are set to values that differ from the kubelet's defaults.
+
+> **Note:** `protect-kernel-defaults` is exposed as a top-level flag for K3s.
+
+#### Set kernel parameters
+
+Create a file called `/etc/sysctl.d/90-kubelet.conf` and add the snippet below. Then run `sysctl -p /etc/sysctl.d/90-kubelet.conf`.
+
+```bash
+vm.panic_on_oom=0
+vm.overcommit_memory=1
+kernel.panic=10
+kernel.panic_on_oops=1
+```
+
+## Kubernetes Runtime Requirements
+
+The runtime requirements to comply with the CIS Benchmark are centered around pod security (PSPs) and network policies. These are outlined in this section. K3s doesn't apply any default PSPs or network policies; however, K3s ships with a controller that is meant to apply a given set of network policies. By default, K3s runs with the "NodeRestriction" admission controller. To enable PSPs, add the following to the K3s start command: `--kube-apiserver-arg="enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount"`. This maintains the "NodeRestriction" plugin while also enabling "PodSecurityPolicy".
+
+### PodSecurityPolicies
+
+When PSPs are enabled, a policy can be applied to satisfy the necessary controls described in section 5.2 of the CIS Benchmark.
+
+Here's an example of a compliant PSP.
+
+```yaml
+apiVersion: policy/v1beta1
+kind: PodSecurityPolicy
+metadata:
+  name: cis1.5-compliant-psp
+spec:
+  privileged: false                # CIS - 5.2.1
+  allowPrivilegeEscalation: false  # CIS - 5.2.5
+  requiredDropCapabilities:        # CIS - 5.2.7/8/9
+    - ALL
+  volumes:
+    - 'configMap'
+    - 'emptyDir'
+    - 'projected'
+    - 'secret'
+    - 'downwardAPI'
+    - 'persistentVolumeClaim'
+  hostNetwork: false               # CIS - 5.2.4
+  hostIPC: false                   # CIS - 5.2.3
+  hostPID: false                   # CIS - 5.2.2
+  runAsUser:
+    rule: 'MustRunAsNonRoot'       # CIS - 5.2.6
+  seLinux:
+    rule: 'RunAsAny'
+  supplementalGroups:
+    rule: 'MustRunAs'
+    ranges:
+      - min: 1
+        max: 65535
+  fsGroup:
+    rule: 'MustRunAs'
+    ranges:
+      - min: 1
+        max: 65535
+  readOnlyRootFilesystem: false
+```
+
+For the above PSP to be effective, we also need to create the corresponding ClusterRoles and ClusterRoleBindings. We also need to include a "system unrestricted policy", which is needed for system-level pods that require additional privileges.
+
+These can be combined with the PSP yaml above and the NetworkPolicy yaml below into a single file and placed in the `/var/lib/rancher/k3s/server/manifests` directory. Below is an example of a `policy.yaml` file.
+
+```yaml
+apiVersion: policy/v1beta1
+kind: PodSecurityPolicy
+metadata:
+  name: cis1.5-compliant-psp
+spec:
+  privileged: false
+  allowPrivilegeEscalation: false
+  requiredDropCapabilities:
+    - ALL
+  volumes:
+    - 'configMap'
+    - 'emptyDir'
+    - 'projected'
+    - 'secret'
+    - 'downwardAPI'
+    - 'persistentVolumeClaim'
+  hostNetwork: false
+  hostIPC: false
+  hostPID: false
+  runAsUser:
+    rule: 'MustRunAsNonRoot'
+  seLinux:
+    rule: 'RunAsAny'
+  supplementalGroups:
+    rule: 'MustRunAs'
+    ranges:
+      - min: 1
+        max: 65535
+  fsGroup:
+    rule: 'MustRunAs'
+    ranges:
+      - min: 1
+        max: 65535
+  readOnlyRootFilesystem: false
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: psp:restricted
+  labels:
+    addonmanager.kubernetes.io/mode: EnsureExists
+rules:
+- apiGroups: ['extensions']
+  resources: ['podsecuritypolicies']
+  verbs: ['use']
+  resourceNames:
+  - cis1.5-compliant-psp
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: default:restricted
+  labels:
+    addonmanager.kubernetes.io/mode: EnsureExists
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: psp:restricted
+subjects:
+- kind: Group
+  name: system:authenticated
+  apiGroup: rbac.authorization.k8s.io
+---
+kind: NetworkPolicy
+apiVersion: networking.k8s.io/v1
+metadata:
+  name: intra-namespace
+  namespace: kube-system
+spec:
+  podSelector: {}
+  ingress:
+    - from:
+      - namespaceSelector:
+          matchLabels:
+            name: kube-system
+---
+kind: NetworkPolicy
+apiVersion: networking.k8s.io/v1
+metadata:
+  name: intra-namespace
+  namespace: default
+spec:
+  podSelector: {}
+  ingress:
+    - from:
+      - namespaceSelector:
+          matchLabels:
+            name: default
+---
+kind: NetworkPolicy
+apiVersion: networking.k8s.io/v1
+metadata:
+  name: intra-namespace
+  namespace: kube-public
+spec:
+  podSelector: {}
+  ingress:
+    - from:
+      - namespaceSelector:
+          matchLabels:
+            name: kube-public
+---
+apiVersion: policy/v1beta1
+kind: PodSecurityPolicy
+metadata:
+  name: system-unrestricted-psp
+spec:
+  allowPrivilegeEscalation: true
+  allowedCapabilities:
+  - '*'
+  fsGroup:
+    rule: RunAsAny
+  hostIPC: true
+  hostNetwork: true
+  hostPID: true
+  hostPorts:
+  - max: 65535
+    min: 0
+  privileged: true
+  runAsUser:
+    rule: RunAsAny
+  seLinux:
+    rule: RunAsAny
+  supplementalGroups:
+    rule: RunAsAny
+  volumes:
+  - '*'
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: system-unrestricted-node-psp-rolebinding
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: system-unrestricted-psp-role
+subjects:
+- apiGroup: rbac.authorization.k8s.io
+  kind: Group
+  name: system:nodes
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: system-unrestricted-psp-role
+rules:
+- apiGroups:
+  - policy
+  resourceNames:
+  - system-unrestricted-psp
+  resources:
+  - podsecuritypolicies
+  verbs:
+  - use
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: system-unrestricted-svc-acct-psp-rolebinding
+  namespace: kube-system
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: system-unrestricted-psp-role
+subjects:
+- apiGroup: rbac.authorization.k8s.io
+  kind: Group
+  name: system:serviceaccounts
+```
+
+> **Note:** Critical Kubernetes additions such as CNI, DNS, and Ingress are run as pods in the `kube-system` namespace. Therefore, this namespace will have a policy that is less restrictive so that these components can run properly.
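+
+Because K3s automatically applies any manifests found in the `/var/lib/rancher/k3s/server/manifests` directory, deploying the combined policy file can be as simple as copying it into place (a sketch; the destination filename is arbitrary):
+
+```bash
+sudo cp policy.yaml /var/lib/rancher/k3s/server/manifests/policy.yaml
+```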
+ +### NetworkPolicies + +> NOTE: K3s deploys kube-router for network policy enforcement. Support for this in K3s is currently experimental. + +CIS requires that all namespaces have a network policy applied that reasonably limits traffic into namespaces and pods. + +Here's an example of a compliant network policy. + +```yaml +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: intra-namespace + namespace: kube-system +spec: + podSelector: {} + ingress: + - from: + - namespaceSelector: + matchLabels: + name: kube-system +``` + +> **Note:** Operators must manage network policies as normal for additional namespaces that are created. + +## Known Issues +The following are controls that K3s currently does not pass by default. Each gap will be explained, along with a note clarifying whether it can be passed through manual operator intervention, or if it will be addressed in a future release of K3s. + + +### Control 1.2.15 +Ensure that the admission control plugin `NamespaceLifecycle` is set. +
+Rationale
+Setting admission control policy to NamespaceLifecycle ensures that objects cannot be created in non-existent namespaces, and that namespaces undergoing termination are not used for creating new objects. This is recommended to enforce the integrity of the namespace termination process and the availability of new objects.
+
+This can be remediated by adding `NamespaceLifecycle` to the value of `enable-admission-plugins=` and passing that to the `--kube-apiserver-arg=` argument of `k3s server`. An example can be found below.
+<br>
+ +### Control 1.2.16 (mentioned above) +Ensure that the admission control plugin `PodSecurityPolicy` is set. +
+Rationale
+A Pod Security Policy is a cluster-level resource that controls the actions that a pod can perform and what it has the ability to access. The PodSecurityPolicy objects define a set of conditions that a pod must run with in order to be accepted into the system. Pod Security Policies are comprised of settings and strategies that control the security features a pod has access to, and hence they must be used to control pod access permissions.
+
+This can be remediated by adding `PodSecurityPolicy` to the value of `enable-admission-plugins=` and passing that to the `--kube-apiserver-arg=` argument of `k3s server`. An example can be found below.
+<br>
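+
+For instance, both admission-plugin controls above can be satisfied with a single flag. This is a condensed excerpt of the consolidated example at the end of this guide:
+
+```bash
+k3s server \
+  --kube-apiserver-arg='enable-admission-plugins=NodeRestriction,PodSecurityPolicy,NamespaceLifecycle,ServiceAccount'
+```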
+ +### Control 1.2.22 +Ensure that the `--audit-log-path` argument is set. +
+Rationale
+Auditing the Kubernetes API Server provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators or other components of the system. Even though Kubernetes currently provides only basic audit capabilities, auditing should be enabled. You can enable it by setting an appropriate audit log path.
+
+This can be remediated by passing this argument as a value to the `--kube-apiserver-arg=` argument to `k3s server`. An example can be found below.
+<br>
+ +### Control 1.2.23 +Ensure that the `--audit-log-maxage` argument is set to 30 or as appropriate. +
+Rationale +Retaining logs for at least 30 days ensures that you can go back in time and investigate or correlate any events. Set your audit log retention period to 30 days or as per your business requirements. + +This can be remediated by passing this argument as a value to the `--kube-apiserver-arg=` argument to `k3s server`. An example can be found below. +
+ +### Control 1.2.24 +Ensure that the `--audit-log-maxbackup` argument is set to 10 or as appropriate. +
+Rationale
+Kubernetes automatically rotates the log files. Retaining old log files ensures that you would have sufficient log data available for carrying out any investigation or correlation. For example, if you have set a file size of 100 MB and the number of old log files to keep as 10, you would have approximately 1 GB of log data that you could potentially use for your analysis.
+
+This can be remediated by passing this argument as a value to the `--kube-apiserver-arg=` argument to `k3s server`. An example can be found below.
+<br>
+ +### Control 1.2.25 +Ensure that the `--audit-log-maxsize` argument is set to 100 or as appropriate. +
+Rationale
+Kubernetes automatically rotates the log files. Retaining old log files ensures that you would have sufficient log data available for carrying out any investigation or correlation. If you have set a file size of 100 MB and the number of old log files to keep as 10, you would have approximately 1 GB of log data that you could potentially use for your analysis.
+
+This can be remediated by passing this argument as a value to the `--kube-apiserver-arg=` argument to `k3s server`. An example can be found below.
+<br>
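+
+Taken together, controls 1.2.22 through 1.2.25 map to the audit-log flags below, excerpted from the consolidated example at the end of this guide:
+
+```bash
+k3s server \
+  --kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit-log' \
+  --kube-apiserver-arg='audit-log-maxage=30' \
+  --kube-apiserver-arg='audit-log-maxbackup=10' \
+  --kube-apiserver-arg='audit-log-maxsize=100'
+```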
+ +### Control 1.2.26 +Ensure that the `--request-timeout` argument is set as appropriate. +
+Rationale
+Setting a global request timeout allows extending the API server request timeout limit to a duration appropriate to the user's connection speed. By default, it is set to 60 seconds, which might be problematic on slower connections, making cluster resources inaccessible once the data volume for requests exceeds what can be transmitted in 60 seconds. But setting this timeout limit too high can exhaust the API server resources, making it prone to a Denial-of-Service attack. Hence, it is recommended to set this limit as appropriate and change the default limit of 60 seconds only if needed.
+
+This can be remediated by passing this argument as a value to the `--kube-apiserver-arg=` argument to `k3s server`. An example can be found below.
+<br>
+ +### Control 1.2.27 +Ensure that the `--service-account-lookup` argument is set to true. +
+Rationale
+If `--service-account-lookup` is not enabled, the apiserver only verifies that the authentication token is valid, and does not validate that the service account token mentioned in the request is actually present in etcd. This allows using a service account token even after the corresponding service account is deleted. This is an example of a time-of-check to time-of-use security issue.
+
+This can be remediated by passing this argument as a value to the `--kube-apiserver-arg=` argument to `k3s server`. An example can be found below.
+<br>
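+
+Controls 1.2.26 and 1.2.27 correspond to the following flags, excerpted from the consolidated example at the end of this guide:
+
+```bash
+k3s server \
+  --kube-apiserver-arg='request-timeout=300s' \
+  --kube-apiserver-arg='service-account-lookup=true'
+```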
+ +### Control 1.2.33 +Ensure that the `--encryption-provider-config` argument is set as appropriate. +
+Rationale +Where `etcd` encryption is used, it is important to ensure that the appropriate set of encryption providers is used. Currently, the aescbc, kms and secretbox are likely to be appropriate options. +
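+
+In K3s, this can be addressed by enabling the built-in secrets encryption, which configures an encryption provider for you. A minimal sketch, using the `--secrets-encryption` flag that also appears in the consolidated example below:
+
+```bash
+k3s server \
+  --secrets-encryption=true
+```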
+ +### Control 1.2.34 +Ensure that encryption providers are appropriately configured. +
+Rationale +`etcd` is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be encrypted at rest to avoid any disclosures. + +This can be remediated by passing a valid configuration to `k3s` as outlined above. +
+ +### Control 1.3.1 +Ensure that the `--terminated-pod-gc-threshold` argument is set as appropriate. +
+Rationale
+Garbage collection is important to ensure sufficient resource availability and to avoid degraded performance and availability. In the worst case, the system might crash or just be unusable for a long period of time. The current setting for garbage collection is 12,500 terminated pods, which might be too high for your system to sustain. Based on your system resources and tests, choose an appropriate threshold value to activate garbage collection.
+
+This can be remediated by passing this argument as a value to the `--kube-controller-manager-arg=` argument to `k3s server`. An example can be found below.
+<br>
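+
+A condensed excerpt from the consolidated example at the end of this guide, covering this control:
+
+```bash
+k3s server \
+  --kube-controller-manager-arg='terminated-pod-gc-threshold=10'
+```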
+ +### Control 3.2.1 +Ensure that a minimal audit policy is created (Scored) +
+Rationale +Logging is an important detective control for all systems, to detect potential unauthorized access. + +This can be remediated by passing controls 1.2.22 - 1.2.25 and verifying their efficacy. +
+ + +### Control 4.2.7 +Ensure that the `--make-iptables-util-chains` argument is set to true. +
+Rationale
+Kubelets can automatically manage the required changes to iptables based on how you choose your networking options for the pods. It is recommended to let kubelets manage the changes to iptables. This ensures that the iptables configuration remains in sync with the pods' networking configuration. Manually configuring iptables with dynamic pod network configuration changes might hamper the communication between pods/containers and to the outside world. You might have iptables rules too restrictive or too open.
+
+This can be remediated by passing this argument as a value to the `--kubelet-arg=` argument to `k3s server`. An example can be found below.
+<br>
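+
+A condensed excerpt from the consolidated example at the end of this guide, covering this control:
+
+```bash
+k3s server \
+  --kubelet-arg='make-iptables-util-chains=true'
+```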
+ +### Control 5.1.5 +Ensure that default service accounts are not actively used. (Scored) +
+Rationale + +Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. + +Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. + +The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments. +
+ +The remediation for this is to update the `automountServiceAccountToken` field to `false` for the `default` service account in each namespace. + +For `default` service accounts in the built-in namespaces (`kube-system`, `kube-public`, `kube-node-lease`, and `default`), K3s does not automatically do this. You can manually update this field on these service accounts to pass the control. + +## Control Plane Execution and Arguments + +Listed below are the K3s control plane components and the arguments they're given at start, by default. Commented to their right is the CIS 1.5 control that they satisfy. + +```bash +kube-apiserver + --advertise-port=6443 + --allow-privileged=true + --anonymous-auth=false # 1.2.1 + --api-audiences=unknown + --authorization-mode=Node,RBAC + --bind-address=127.0.0.1 + --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs + --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt # 1.2.31 + --enable-admission-plugins=NodeRestriction,PodSecurityPolicy # 1.2.17 + --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt # 1.2.32 + --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt # 1.2.29 + --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key # 1.2.29 + --etcd-servers=https://127.0.0.1:2379 + --insecure-port=0 # 1.2.19 + --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt + --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt + --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key + --profiling=false # 1.2.21 + --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt + --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key + --requestheader-allowed-names=system:auth-proxy + --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt + --requestheader-extra-headers-prefix=X-Remote-Extra- + --requestheader-group-headers=X-Remote-Group + --requestheader-username-headers=X-Remote-User + --secure-port=6444 # 1.2.20 + --service-account-issuer=k3s + --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key # 1.2.28 + --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key + --service-cluster-ip-range=10.43.0.0/16 + --storage-backend=etcd3 + --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt # 1.2.30 + --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key # 1.2.30 +``` + +```bash +kube-controller-manager + --address=127.0.0.1 + --allocate-node-cidrs=true + --bind-address=127.0.0.1 # 1.3.7 + --cluster-cidr=10.42.0.0/16 + --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt + --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key + --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig + --port=10252 + --profiling=false # 1.3.2 + --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt # 1.3.5 + --secure-port=0 + --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key # 1.3.4 + --use-service-account-credentials=true # 1.3.3 +``` + +```bash +kube-scheduler + --address=127.0.0.1 + --bind-address=127.0.0.1 # 1.4.2 + --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig + --port=10251 + --profiling=false # 1.4.1 + --secure-port=0 +``` + +```bash +kubelet + --address=0.0.0.0 + --anonymous-auth=false # 4.2.1 + --authentication-token-webhook=true + --authorization-mode=Webhook # 4.2.2 + --cgroup-driver=cgroupfs 
+ --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt # 4.2.3 + --cloud-provider=external + --cluster-dns=10.43.0.10 + --cluster-domain=cluster.local + --cni-bin-dir=/var/lib/rancher/k3s/data/223e6420f8db0d8828a8f5ed3c44489bb8eb47aa71485404f8af8c462a29bea3/bin + --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d + --container-runtime-endpoint=/run/k3s/containerd/containerd.sock + --container-runtime=remote + --containerd=/run/k3s/containerd/containerd.sock + --eviction-hard=imagefs.available<5%,nodefs.available<5% + --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% + --fail-swap-on=false + --healthz-bind-address=127.0.0.1 + --hostname-override=hostname01 + --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig + --kubelet-cgroups=/systemd/system.slice + --node-labels= + --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests + --protect-kernel-defaults=true # 4.2.6 + --read-only-port=0 # 4.2.4 + --resolv-conf=/run/systemd/resolve/resolv.conf + --runtime-cgroups=/systemd/system.slice + --serialize-image-pulls=false + --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt # 4.2.10 + --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key # 4.2.10 +``` + +The command below is an example of how the outlined remediations can be applied. + +```bash +k3s server \ + --protect-kernel-defaults=true \ + --secrets-encryption=true \ + --kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit-log' \ + --kube-apiserver-arg='audit-log-maxage=30' \ + --kube-apiserver-arg='audit-log-maxbackup=10' \ + --kube-apiserver-arg='audit-log-maxsize=100' \ + --kube-apiserver-arg='request-timeout=300s' \ + --kube-apiserver-arg='service-account-lookup=true' \ + --kube-apiserver-arg='enable-admission-plugins=NodeRestriction,PodSecurityPolicy,NamespaceLifecycle,ServiceAccount' \ + --kube-controller-manager-arg='terminated-pod-gc-threshold=10' \ + --kube-controller-manager-arg='use-service-account-credentials=true' \ + --kubelet-arg='streaming-connection-idle-timeout=5m' \ + --kubelet-arg='make-iptables-util-chains=true' +``` + +## Conclusion + +If you have followed this guide, your K3s cluster will be configured to comply with the CIS Kubernetes Benchmark. You can review the [CIS Benchmark Self-Assessment Guide](../self_assessment/) to understand the expectations of each of the benchmarks and how you can do the same on your cluster. diff --git a/content/k3s/latest/en/security/self_assessment/_index.md b/content/k3s/latest/en/security/self_assessment/_index.md new file mode 100644 index 00000000000..013da6db076 --- /dev/null +++ b/content/k3s/latest/en/security/self_assessment/_index.md @@ -0,0 +1,2497 @@ +--- +title: "CIS Self Assessment Guide" +weight: 90 +--- + + +### CIS Kubernetes Benchmark v1.5 - K3s v1.17, v1.18, & v1.19 + +#### Overview + +This document is a companion to the K3s security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of K3s, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the CIS Kubernetes benchmark. It is to be used by K3s operators, security teams, auditors, and decision-makers. + +This guide is specific to the **v1.17**, **v1.18**, and **v1.19** release line of K3s and the **v1.5.1** release of the CIS Kubernetes Benchmark. 
+ +For more detail about each control, including more detailed descriptions and remediations for failing tests, you can refer to the corresponding section of the CIS Kubernetes Benchmark v1.5. You can download the benchmark after logging in to [CISecurity.org](https://www.cisecurity.org/benchmark/kubernetes/). + +#### Testing controls methodology + +Each control in the CIS Kubernetes Benchmark was evaluated against a K3s cluster that was configured according to the accompanying hardening guide. + +Where control audits differ from the original CIS benchmark, the audit commands specific to K3s are provided for testing. + +These are the possible results for each control: + +- **Pass** - The K3s cluster under test passed the audit outlined in the benchmark. +- **Not Applicable** - The control is not applicable to K3s because of how it is designed to operate. The remediation section will explain why this is so. +- **Not Scored - Operator Dependent** - The control is not scored in the CIS benchmark and it depends on the cluster's use case or some other factor that must be determined by the cluster operator. These controls have been evaluated to ensure K3s does not prevent their implementation, but no further configuration or auditing of the cluster under test has been performed. + +This guide makes the assumption that K3s is running as a Systemd unit. Your installation may vary and will require you to adjust the "audit" commands to fit your scenario. + +### Controls + +--- +## 1 Master Node Security Configuration +### 1.1 Master Node Configuration Files + +#### 1.1.1 +Ensure that the API server pod specification file permissions are set to `644` or more restrictive (Scored) +
+Rationale +The API server pod specification file controls various parameters that set the behavior of the API server. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. +
+ +**Result:** Not Applicable + + +#### 1.1.2 +Ensure that the API server pod specification file ownership is set to `root:root` (Scored) +
+Rationale +The API server pod specification file controls various parameters that set the behavior of the API server. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`. +
+ +**Result:** Not Applicable + + +#### 1.1.3 +Ensure that the controller manager pod specification file permissions are set to `644` or more restrictive (Scored) +
+Rationale +The controller manager pod specification file controls various parameters that set the behavior of the Controller Manager on the master node. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. +
+ +**Result:** Not Applicable + + +#### 1.1.4 +Ensure that the controller manager pod specification file ownership is set to `root:root` (Scored) +
+Rationale +The controller manager pod specification file controls various parameters that set the behavior of various components of the master node. You should set its file ownership to maintain the integrity of the file. The file should be owned by root:root. +
+ +**Result:** Not Applicable + + +#### 1.1.5 +Ensure that the scheduler pod specification file permissions are set to `644` or more restrictive (Scored) +
+Rationale +The scheduler pod specification file controls various parameters that set the behavior of the Scheduler service in the master node. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. +
+ +**Result:** Not Applicable + + +#### 1.1.6 +Ensure that the scheduler pod specification file ownership is set to `root:root` (Scored) +
+Rationale +The scheduler pod specification file controls various parameters that set the behavior of the kube-scheduler service in the master node. You should set its file ownership to maintain the integrity of the file. The file should be owned by root:root. +
+ +**Result:** Not Applicable + + +#### 1.1.7 +Ensure that the etcd pod specification file permissions are set to `644` or more restrictive (Scored) +
+Rationale
+The etcd pod specification file /var/lib/rancher/k3s/agent/pod-manifests/etcd.yaml controls various parameters that set the behavior of the etcd service in the master node. etcd is a highly-available key-value store which Kubernetes uses for persistent storage of all of its REST API objects. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system.
+<br>
+ +**Result:** Not Applicable + + +#### 1.1.8 +Ensure that the etcd pod specification file ownership is set to `root:root` (Scored) +
+Rationale
+The etcd pod specification file /var/lib/rancher/k3s/agent/pod-manifests/etcd.yaml controls various parameters that set the behavior of the etcd service in the master node. etcd is a highly-available key-value store which Kubernetes uses for persistent storage of all of its REST API objects. You should set its file ownership to maintain the integrity of the file. The file should be owned by root:root.
+<br>
+ +**Result:** Not Applicable + + +#### 1.1.9 +Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Not Scored) +
+Rationale +Container Network Interface provides various networking options for overlay networking. You should consult their documentation and restrict their respective file permissions to maintain the integrity of those files. Those files should be writable by only the administrators on the system. +
+ +**Result:** Not Applicable + + +#### 1.1.10 +Ensure that the Container Network Interface file ownership is set to root:root (Not Scored) +
+Rationale +Container Network Interface provides various networking options for overlay networking. You should consult their documentation and restrict their respective file permissions to maintain the integrity of those files. Those files should be owned by root:root. +
+ +**Result:** Not Applicable + + +#### 1.1.11 +Ensure that the etcd data directory permissions are set to 700 or more restrictive (Scored) +
+Rationale +etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. This data directory should be protected from any unauthorized reads or writes. It should not be readable or writable by any group members or the world. +
+
+**Result:** Pass
+
+**Audit:**
+```bash
+stat -c %a /var/lib/rancher/k3s/server/db/etcd
+700
+```
+
+**Remediation:**
+K3s manages the etcd data directory and sets its permissions to 700. No manual remediation is needed. (This is only relevant when etcd is used as the datastore.)
+
+
+#### 1.1.12
+Ensure that the etcd data directory ownership is set to `etcd:etcd` (Scored)
+<br>
+Rationale +etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. This data directory should be protected from any unauthorized reads or writes. It should be owned by etcd:etcd. +
+ +**Result:** Not Applicable + + +#### 1.1.13 +Ensure that the `admin.conf` file permissions are set to `644` or more restrictive (Scored) +
+Rationale +The admin.conf is the administrator kubeconfig file defining various settings for the administration of the cluster. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. + +In K3s, this file is located at `/var/lib/rancher/k3s/server/cred/admin.kubeconfig`. +
+ +**Result:** Pass + +**Remediation:** +By default, K3s creates the directory and files with the expected permissions of `644`. No manual remediation should be necessary. + + +#### 1.1.14 +Ensure that the `admin.conf` file ownership is set to `root:root` (Scored) +
+Rationale +The admin.conf file contains the admin credentials for the cluster. You should set its file ownership to maintain the integrity of the file. The file should be owned by root:root. + +In K3s, this file is located at `/var/lib/rancher/k3s/server/cred/admin.kubeconfig`. +
+ +**Result:** Pass + +**Remediation:** +By default, K3s creates the directory and files with the expected ownership of `root:root`. No manual remediation should be necessary. + + +#### 1.1.15 +Ensure that the `scheduler.conf` file permissions are set to `644` or more restrictive (Scored) +
+Rationale + +The scheduler.conf file is the kubeconfig file for the Scheduler. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. + +In K3s, this file is located at `/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig`. +
+ +**Result:** Pass + +**Remediation:** +By default, K3s creates the directory and files with the expected permissions of `644`. No manual remediation should be necessary. + + +#### 1.1.16 +Ensure that the `scheduler.conf` file ownership is set to `root:root` (Scored) +
+Rationale +The scheduler.conf file is the kubeconfig file for the Scheduler. You should set its file ownership to maintain the integrity of the file. The file should be owned by root:root. + +In K3s, this file is located at `/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig`. +
+ +**Result:** Pass + +**Remediation:** +By default, K3s creates the directory and files with the expected ownership of `root:root`. No manual remediation should be necessary. + + +#### 1.1.17 +Ensure that the `controller.kubeconfig` file permissions are set to `644` or more restrictive (Scored) +
+Rationale
+The controller.kubeconfig file is the kubeconfig file for the Controller Manager. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system.
+
+In K3s, this file is located at `/var/lib/rancher/k3s/server/cred/controller.kubeconfig`.
+<br>
+ +**Result:** Pass + +**Remediation:** +By default, K3s creates the directory and files with the expected permissions of `644`. No manual remediation should be necessary. + + +#### 1.1.18 +Ensure that the `controller.kubeconfig` file ownership is set to `root:root` (Scored) +
+Rationale
+The controller.kubeconfig file is the kubeconfig file for the Controller Manager. You should set its file ownership to maintain the integrity of the file. The file should be owned by root:root.
+
+In K3s, this file is located at `/var/lib/rancher/k3s/server/cred/controller.kubeconfig`.
+<br>
+ +**Result:** Pass + +**Remediation:** +By default, K3s creates the directory and files with the expected ownership of `root:root`. No manual remediation should be necessary. + + +#### 1.1.19 +Ensure that the Kubernetes PKI directory and file ownership is set to `root:root` (Scored) +
+Rationale +Kubernetes makes use of a number of certificates as part of its operation. You should set the ownership of the directory containing the PKI information and all files in that directory to maintain their integrity. The directory and files should be owned by root:root. +
+ +**Result:** Pass + +**Audit:** +```bash +stat -c %U:%G /var/lib/rancher/k3s/server/tls +root:root +``` + +**Remediation:** +By default, K3s creates the directory and files with the expected ownership of `root:root`. No manual remediation should be necessary. + + +#### 1.1.20 +Ensure that the Kubernetes PKI certificate file permissions are set to `644` or more restrictive (Scored) +
+Rationale +Kubernetes makes use of a number of certificate files as part of the operation of its components. The permissions on these files should be set to 644 or more restrictive to protect their integrity. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +stat -c %n\ %a /var/lib/rancher/k3s/server/tls/*.crt +``` + +Verify that the permissions are `644` or more restrictive. + +**Remediation:** +By default, K3s creates the files with the expected permissions of `644`. No manual remediation is needed. + + +#### 1.1.21 +Ensure that the Kubernetes PKI key file permissions are set to `600` (Scored) +
+Rationale +Kubernetes makes use of a number of key files as part of the operation of its components. The permissions on these files should be set to 600 to protect their integrity and confidentiality. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +stat -c %n\ %a /var/lib/rancher/k3s/server/tls/*.key +``` + +Verify that the permissions are `600` or more restrictive. + +**Remediation:** +By default, K3s creates the files with the expected permissions of `600`. No manual remediation is needed. + + +### 1.2 API Server +This section contains recommendations relating to API server configuration flags + + +#### 1.2.1 +Ensure that the `--anonymous-auth` argument is set to false (Not Scored) + +
+Rationale +When enabled, requests that are not rejected by other configured authentication methods are treated as anonymous requests. These requests are then served by the API server. You should rely on authentication to authorize access and disallow anonymous requests. + +If you are using RBAC authorization, it is generally considered reasonable to allow anonymous access to the API Server for health checks and discovery purposes, and hence this recommendation is not scored. However, you should consider whether anonymous discovery is an acceptable risk for your purposes. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "anonymous-auth" +``` + +Verify that `--anonymous-auth=false` is present. + +**Remediation:** +By default, K3s kube-apiserver is configured to run with this flag and value. No manual remediation is needed. + +#### 1.2.2 +Ensure that the `--basic-auth-file` argument is not set (Scored) +
+Rationale +Basic authentication uses plaintext credentials for authentication. Currently, the basic authentication credentials last indefinitely, and the password cannot be changed without restarting the API server. The basic authentication is currently supported for convenience. Hence, basic authentication should not be used. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "basic-auth-file" +``` + +Verify that the `--basic-auth-file` argument does not exist. + +**Remediation:** +By default, K3s does not run with basic authentication enabled. No manual remediation is needed. + + +#### 1.2.3 +Ensure that the `--token-auth-file` parameter is not set (Scored) + +
+Rationale +The token-based authentication utilizes static tokens to authenticate requests to the apiserver. The tokens are stored in clear-text in a file on the apiserver, and cannot be revoked or rotated without restarting the apiserver. Hence, do not use static token-based authentication. +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "token-auth-file"
+```
+
+Verify that the `--token-auth-file` argument does not exist.
+
+**Remediation:**
+By default, K3s does not run with static token-based authentication enabled. No manual remediation is needed.
+
+#### 1.2.4
+Ensure that the `--kubelet-https` argument is set to true (Scored)
+
+Rationale +Connections from apiserver to kubelets could potentially carry sensitive data such as secrets and keys. It is thus important to use in-transit encryption for any communication between the apiserver and kubelets. +
+
+**Result:** Not Applicable
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "kubelet-https"
+```
+
+Verify that the `--kubelet-https` argument does not exist.
+
+**Remediation:**
+By default, K3s does not run the kube-apiserver with the `--kubelet-https` parameter, as connections to the kubelet already use TLS. No manual remediation is needed.
+
+#### 1.2.5
+Ensure that the `--kubelet-client-certificate` and `--kubelet-client-key` arguments are set as appropriate (Scored)
+
+Rationale +The apiserver, by default, does not authenticate itself to the kubelet's HTTPS endpoints. The requests from the apiserver are treated anonymously. You should set up certificate-based kubelet authentication to ensure that the apiserver authenticates itself to kubelets when submitting requests. +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep -E 'kubelet-client-certificate|kubelet-client-key'
+```
+
+Verify that the `--kubelet-client-certificate` and `--kubelet-client-key` arguments exist and they are set as appropriate.
+
+**Remediation:**
+By default, K3s kube-apiserver is run with these arguments for secure communication with kubelet. No manual remediation is needed.
+
+
+#### 1.2.6
+Ensure that the `--kubelet-certificate-authority` argument is set as appropriate (Scored)
+Rationale +The connections from the apiserver to the kubelet are used for fetching logs for pods, attaching (through kubectl) to running pods, and using the kubelet’s port-forwarding functionality. These connections terminate at the kubelet’s HTTPS endpoint. By default, the apiserver does not verify the kubelet’s serving certificate, which makes the connection subject to man-in-the-middle attacks, and unsafe to run over untrusted and/or public networks. +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "kubelet-certificate-authority"
+```
+
+Verify that the `--kubelet-certificate-authority` argument exists and is set as appropriate.
+
+**Remediation:**
+By default, K3s kube-apiserver is run with this argument for secure communication with kubelet. No manual remediation is needed.
+
+
+#### 1.2.7
+Ensure that the `--authorization-mode` argument is not set to `AlwaysAllow` (Scored)
+Rationale
+The API Server can be configured to allow all requests. This mode should not be used on any production cluster.
+
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode" +``` + +Verify that the argument value doesn't contain `AlwaysAllow`. + +**Remediation:** +By default, K3s sets `Node,RBAC` as the parameter to the `--authorization-mode` argument. No manual remediation is needed. + + +#### 1.2.8 +Ensure that the `--authorization-mode` argument includes `Node` (Scored) +
+Rationale +The Node authorization mode only allows kubelets to read Secret, ConfigMap, PersistentVolume, and PersistentVolumeClaim objects associated with their nodes. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode" +``` + +Verify `Node` exists as a parameter to the argument. + +**Remediation:** +By default, K3s sets `Node,RBAC` as the parameter to the `--authorization-mode` argument. No manual remediation is needed. + + +#### 1.2.9 +Ensure that the `--authorization-mode` argument includes `RBAC` (Scored) +
+Rationale +Role Based Access Control (RBAC) allows fine-grained control over the operations that different entities can perform on different objects in the cluster. It is recommended to use the RBAC authorization mode. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode" +``` + +Verify `RBAC` exists as a parameter to the argument. + +**Remediation:** +By default, K3s sets `Node,RBAC` as the parameter to the `--authorization-mode` argument. No manual remediation is needed. + + +#### 1.2.10 +Ensure that the admission control plugin EventRateLimit is set (Not Scored) +
+Rationale +Using `EventRateLimit` admission control enforces a limit on the number of events that the API Server will accept in a given time slice. A misbehaving workload could overwhelm and DoS the API Server, making it unavailable. This particularly applies to a multi-tenant cluster, where there might be a small percentage of misbehaving tenants which could have a significant impact on the performance of the cluster overall. Hence, it is recommended to limit the rate of events that the API server will accept. + +Note: This is an Alpha feature in the Kubernetes 1.15 release. +
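+
+A sketch of what enabling this might look like; all paths and limit values below are illustrative assumptions, not K3s defaults:
+
+```bash
+# Hypothetical EventRateLimit configuration referenced by an AdmissionConfiguration file
+cat <<'EOF' > /path/to/eventratelimit.yaml
+apiVersion: eventratelimit.admission.k8s.io/v1alpha1
+kind: Configuration
+limits:
+  - type: Server
+    qps: 5000
+    burst: 20000
+EOF
+cat <<'EOF' > /path/to/admission-config.yaml
+apiVersion: apiserver.config.k8s.io/v1
+kind: AdmissionConfiguration
+plugins:
+  - name: EventRateLimit
+    path: /path/to/eventratelimit.yaml
+EOF
+# Pass both the plugin and the configuration file to the embedded kube-apiserver
+k3s server \
+  --kube-apiserver-arg='enable-admission-plugins=NodeRestriction,PodSecurityPolicy,EventRateLimit' \
+  --kube-apiserver-arg='admission-control-config-file=/path/to/admission-config.yaml'
+```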
+ +**Result:** **Not Scored - Operator Dependent** + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "enable-admission-plugins" +``` + +Verify that the `--enable-admission-plugins` argument is set to a value that includes EventRateLimit. + +**Remediation:** +By default, K3s only sets `NodeRestriction,PodSecurityPolicy` as the parameter to the `--enable-admission-plugins` argument. +To configure this, follow the Kubernetes documentation and set the desired limits in a configuration file. Then refer to K3s's documentation to see how to supply additional api server configuration via the kube-apiserver-arg parameter. + + +#### 1.2.11 +Ensure that the admission control plugin `AlwaysAdmit` is not set (Scored) +
+Rationale
+Setting the admission control plugin AlwaysAdmit allows all requests and does not filter any requests.
+
+The AlwaysAdmit admission controller was deprecated in Kubernetes v1.13. Its behavior was equivalent to turning off all admission controllers.
+
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "enable-admission-plugins" +``` + +Verify that if the `--enable-admission-plugins` argument is set, its value does not include `AlwaysAdmit`. + +**Remediation:** +By default, K3s only sets `NodeRestriction,PodSecurityPolicy` as the parameter to the `--enable-admission-plugins` argument. No manual remediation needed. + + +#### 1.2.12 +Ensure that the admission control plugin AlwaysPullImages is set (Not Scored) +
+Rationale +Setting admission control policy to `AlwaysPullImages` forces every new pod to pull the required images every time. In a multi-tenant cluster users can be assured that their private images can only be used by those who have the credentials to pull them. Without this admission control policy, once an image has been pulled to a node, any pod from any user can use it simply by knowing the image’s name, without any authorization check against the image ownership. When this plug-in is enabled, images are always pulled prior to starting containers, which means valid credentials are required. + +
+ +**Result:** **Not Scored - Operator Dependent** + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "enable-admission-plugins" +``` + +Verify that the `--enable-admission-plugins` argument is set to a value that includes `AlwaysPullImages`. + +**Remediation:** +By default, K3s only sets `NodeRestriction,PodSecurityPolicy` as the parameter to the `--enable-admission-plugins` argument. +To configure this, follow the Kubernetes documentation and set the desired limits in a configuration file. Then refer to K3s's documentation to see how to supply additional api server configuration via the kube-apiserver-arg parameter. + +#### 1.2.13 +Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Not Scored) +
+Rationale +SecurityContextDeny can be used to provide a layer of security for clusters which do not have PodSecurityPolicies enabled. +
+
+**Result:** Not Scored
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "enable-admission-plugins"
+```
+
+Verify that the `--enable-admission-plugins` argument is set to a value that includes `SecurityContextDeny`, if `PodSecurityPolicy` is not included.
+
+**Remediation:**
+K3s would need to have the `SecurityContextDeny` admission plugin enabled by passing it as an argument to K3s. `--kube-apiserver-arg='enable-admission-plugins=SecurityContextDeny'`.
+
+
+#### 1.2.14
+Ensure that the admission control plugin `ServiceAccount` is set (Scored)
+Rationale +When you create a pod, if you do not specify a service account, it is automatically assigned the `default` service account in the same namespace. You should create your own service account and let the API server manage its security tokens. +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "ServiceAccount"
+```
+
+Verify that the `--disable-admission-plugins` argument is set to a value that does not include `ServiceAccount`.
+
+**Remediation:**
+By default, K3s does not use this argument. If there's a desire to use this argument, follow the documentation and create ServiceAccount objects as per your environment. Then refer to K3s's documentation to see how to supply additional api server configuration via the kube-apiserver-arg parameter.
+
+
+#### 1.2.15
+Ensure that the admission control plugin `NamespaceLifecycle` is set (Scored)
+Rationale +Setting admission control policy to `NamespaceLifecycle` ensures that objects cannot be created in non-existent namespaces, and that namespaces undergoing termination are not used for creating the new objects. This is recommended to enforce the integrity of the namespace termination process and also for the availability of the newer objects. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "disable-admission-plugins" +``` + +Verify that the `--disable-admission-plugins` argument is set to a value that does not include `NamespaceLifecycle`. + +**Remediation:** +By default, K3s does not use this argument. No manual remediation needed. + + +#### 1.2.16 +Ensure that the admission control plugin `PodSecurityPolicy` is set (Scored) +
+Rationale +A Pod Security Policy is a cluster-level resource that controls the actions that a pod can perform and what it has the ability to access. The `PodSecurityPolicy` objects define a set of conditions that a pod must run with in order to be accepted into the system. Pod Security Policies are comprised of settings and strategies that control the security features a pod has access to and hence this must be used to control pod access permissions. + +**Note:** When the PodSecurityPolicy admission plugin is in use, there needs to be at least one PodSecurityPolicy in place for ANY pods to be admitted. See section 1.7 for recommendations on PodSecurityPolicy settings. +
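+
+As a sketch, the plugin list is supplied comma-separated in a single argument; the exact list should match your environment (the combination below is only an example):
+
+```bash
+k3s server --kube-apiserver-arg='enable-admission-plugins=NodeRestriction,PodSecurityPolicy'
+```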
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "enable-admission-plugins"
+```
+
+Verify that the `--enable-admission-plugins` argument is set to a value that includes `PodSecurityPolicy`.
+
+**Remediation:**
+K3s would need to have the `PodSecurityPolicy` admission plugin enabled by passing it as an argument to K3s. `--kube-apiserver-arg='enable-admission-plugins=PodSecurityPolicy'`.
+
+
+#### 1.2.17
+Ensure that the admission control plugin `NodeRestriction` is set (Scored)
+Rationale +Using the `NodeRestriction` plug-in ensures that the kubelet is restricted to the `Node` and `Pod` objects that it could modify as defined. Such kubelets will only be allowed to modify their own `Node` API object, and only modify `Pod` API objects that are bound to their node. + +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "enable-admission-plugins"
+```
+
+Verify that the `--enable-admission-plugins` argument is set to a value that includes `NodeRestriction`.
+
+**Remediation:**
+K3s would need to have the `NodeRestriction` admission plugin enabled by passing it as an argument to K3s. `--kube-apiserver-arg='enable-admission-plugins=NodeRestriction'`.
+
+
+#### 1.2.18
+Ensure that the `--insecure-bind-address` argument is not set (Scored)
+Rationale
+If you bind the apiserver to an insecure address, anyone who can connect to it over the insecure port has unauthenticated and unencrypted access to your master node. The apiserver performs no authentication checking for insecure binds, and traffic to the insecure API port is not encrypted, allowing attackers to potentially read sensitive data in transit.
+
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "insecure-bind-address" +``` + +Verify that the `--insecure-bind-address` argument does not exist. + +**Remediation:** +By default, K3s explicitly excludes the use of the `--insecure-bind-address` parameter. No manual remediation is needed. + + +#### 1.2.19 +Ensure that the `--insecure-port` argument is set to `0` (Scored) +
+Rationale +Setting up the apiserver to serve on an insecure port would allow unauthenticated and unencrypted access to your master node. This would allow attackers who could access this port, to easily take control of the cluster. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "insecure-port" +``` + +Verify that the `--insecure-port` argument is set to `0`. + +**Remediation:** +By default, K3s starts the kube-apiserver process with this argument's parameter set to `0`. No manual remediation is needed. + + +#### 1.2.20 +Ensure that the `--secure-port` argument is not set to `0` (Scored) +
+Rationale +The secure port is used to serve https with authentication and authorization. If you disable it, no https traffic is served and all traffic is served unencrypted. +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "secure-port"
+```
+
+Verify that the `--secure-port` argument is either not set or is set to an integer value between 1 and 65535.
+
+**Remediation:**
+By default, K3s sets the `--secure-port` argument to `6444`. No manual remediation is needed.
+
+
+#### 1.2.21
+Ensure that the `--profiling` argument is set to `false` (Scored)
+Rationale +Profiling allows for the identification of specific performance bottlenecks. It generates a significant amount of program data that could potentially be exploited to uncover system and program details. If you are not experiencing any bottlenecks and do not need the profiler for troubleshooting purposes, it is recommended to turn it off to reduce the potential attack surface. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "profiling" +``` + +Verify that the `--profiling` argument is set to false. + +**Remediation:** +By default, K3s sets the `--profiling` flag parameter to false. No manual remediation needed. + + +#### 1.2.22 +Ensure that the `--audit-log-path` argument is set (Scored) +
+Rationale
+Auditing the Kubernetes API Server provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators or other components of the system. Even though Kubernetes currently provides only basic audit capabilities, it should be enabled. You can enable it by setting an appropriate audit log path.
+
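+
+A sketch of a K3s server invocation that enables audit logging, combining the flag from this check with the retention settings covered in 1.2.23 through 1.2.25 (the log path is illustrative):
+
+```bash
+k3s server \
+  --kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log' \
+  --kube-apiserver-arg='audit-log-maxage=30' \
+  --kube-apiserver-arg='audit-log-maxbackup=10' \
+  --kube-apiserver-arg='audit-log-maxsize=100'
+```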
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "audit-log-path" +``` + +Verify that the `--audit-log-path` argument is set as appropriate. + +**Remediation:** +K3s server needs to be run with the following argument, `--kube-apiserver-arg='audit-log-path=/path/to/log/file'`. + + +#### 1.2.23 +Ensure that the `--audit-log-maxage` argument is set to `30` or as appropriate (Scored) +
+Rationale +Retaining logs for at least 30 days ensures that you can go back in time and investigate or correlate any events. Set your audit log retention period to 30 days or as per your business requirements. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "audit-log-maxage" +``` + +Verify that the `--audit-log-maxage` argument is set to `30` or as appropriate. + +**Remediation:** +K3s server needs to be run with the following argument, `--kube-apiserver-arg='audit-log-maxage=30'`. + + +#### 1.2.24 +Ensure that the `--audit-log-maxbackup` argument is set to `10` or as appropriate (Scored) +
+Rationale
+Kubernetes automatically rotates the log files. Retaining old log files ensures that you would have sufficient log data available for carrying out any investigation or correlation. For example, if you have set a file size of 100 MB and the number of old log files to keep as 10, you would have approximately 1 GB of log data that you could potentially use for your analysis.
+
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "audit-log-maxbackup" +``` + +Verify that the `--audit-log-maxbackup` argument is set to `10` or as appropriate. + +**Remediation:** +K3s server needs to be run with the following argument, `--kube-apiserver-arg='audit-log-maxbackup=10'`. + + +#### 1.2.25 +Ensure that the `--audit-log-maxsize` argument is set to `100` or as appropriate (Scored) +
+Rationale
+Kubernetes automatically rotates the log files. Retaining old log files ensures that you would have sufficient log data available for carrying out any investigation or correlation. If you have set a file size of 100 MB and the number of old log files to keep as 10, you would have approximately 1 GB of log data that you could potentially use for your analysis.
+
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "audit-log-maxsize" +``` + +Verify that the `--audit-log-maxsize` argument is set to `100` or as appropriate. + +**Remediation:** +K3s server needs to be run with the following argument, `--kube-apiserver-arg='audit-log-maxsize=100'`. + + +#### 1.2.26 +Ensure that the `--request-timeout` argument is set as appropriate (Scored) +
+Rationale +Setting global request timeout allows extending the API server request timeout limit to a duration appropriate to the user's connection speed. By default, it is set to 60 seconds which might be problematic on slower connections making cluster resources inaccessible once the data volume for requests exceeds what can be transmitted in 60 seconds. But, setting this timeout limit to be too large can exhaust the API server resources making it prone to Denial-of-Service attack. Hence, it is recommended to set this limit as appropriate and change the default limit of 60 seconds only if needed. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "request-timeout" +``` + +Verify that the `--request-timeout` argument is either not set or set to an appropriate value. + +**Remediation:** +By default, K3s does not set the `--request-timeout` argument. No manual remediation needed. + + +#### 1.2.27 +Ensure that the `--service-account-lookup` argument is set to `true` (Scored) +
+Rationale
+If `--service-account-lookup` is not enabled, the apiserver only verifies that the authentication token is valid, and does not validate that the service account token mentioned in the request is actually present in etcd. This allows using a service account token even after the corresponding service account is deleted. This is an example of a time-of-check to time-of-use security issue.
+
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "service-account-lookup" +``` + +Verify that if the `--service-account-lookup` argument exists it is set to `true`. + +**Remediation:** +K3s server needs to be run with the following argument, `--kube-apiserver-arg='service-account-lookup=true'`. + + +#### 1.2.28 +Ensure that the `--service-account-key-file` argument is set as appropriate (Scored) +
+Rationale +By default, if no `--service-account-key-file` is specified to the apiserver, it uses the private key from the TLS serving certificate to verify service account tokens. To ensure that the keys for service account tokens could be rotated as needed, a separate public/private key pair should be used for signing service account tokens. Hence, the public key should be specified to the apiserver with `--service-account-key-file`. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "service-account-key-file" +``` + +Verify that the `--service-account-key-file` argument exists and is set as appropriate. + +**Remediation:** +By default, K3s sets the `--service-account-key-file` explicitly. No manual remediation needed. + + +#### 1.2.29 +Ensure that the `--etcd-certfile` and `--etcd-keyfile` arguments are set as appropriate (Scored) +
+Rationale +etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be protected by client authentication. This requires the API server to identify itself to the etcd server using a client certificate and key. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep -E 'etcd-certfile|etcd-keyfile' +``` + +Verify that the `--etcd-certfile` and `--etcd-keyfile` arguments exist and they are set as appropriate. + +**Remediation:** +By default, K3s sets the `--etcd-certfile` and `--etcd-keyfile` arguments explicitly. No manual remediation needed. + + +#### 1.2.30 +Ensure that the `--tls-cert-file` and `--tls-private-key-file` arguments are set as appropriate (Scored) +
+Rationale +API server communication contains sensitive parameters that should remain encrypted in transit. Configure the API server to serve only HTTPS traffic. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep -E 'tls-cert-file|tls-private-key-file' +``` + +Verify that the `--tls-cert-file` and `--tls-private-key-file` arguments exist and they are set as appropriate. + +**Remediation:** +By default, K3s sets the `--tls-cert-file` and `--tls-private-key-file` arguments explicitly. No manual remediation needed. + + +#### 1.2.31 +Ensure that the `--client-ca-file` argument is set as appropriate (Scored) +
+Rationale +API server communication contains sensitive parameters that should remain encrypted in transit. Configure the API server to serve only HTTPS traffic. If `--client-ca-file` argument is set, any request presenting a client certificate signed by one of the authorities in the `client-ca-file` is authenticated with an identity corresponding to the CommonName of the client certificate. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "client-ca-file" +``` + +Verify that the `--client-ca-file` argument exists and it is set as appropriate. + +**Remediation:** +By default, K3s sets the `--client-ca-file` argument explicitly. No manual remediation needed. + + +#### 1.2.32 +Ensure that the `--etcd-cafile` argument is set as appropriate (Scored) +
+Rationale +etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be protected by client authentication. This requires the API server to identify itself to the etcd server using a SSL Certificate Authority file. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "etcd-cafile" +``` + +Verify that the `--etcd-cafile` argument exists and it is set as appropriate. + +**Remediation:** +By default, K3s sets the `--etcd-cafile` argument explicitly. No manual remediation needed. + + +#### 1.2.33 +Ensure that the `--encryption-provider-config` argument is set as appropriate (Scored) +
+Rationale +etcd is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be encrypted at rest to avoid any disclosures. +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "encryption-provider-config"
+```
+
+Verify that the `--encryption-provider-config` argument is set to an EncryptionConfig file. Additionally, ensure that the EncryptionConfig file has all the desired resources covered, especially any secrets.
+
+**Remediation:**
+K3s server needs to be run with the following argument, `--kube-apiserver-arg='encryption-provider-config=/path/to/encryption_config'`. This can be done by running K3s with the `--secrets-encryption` argument, which will configure the encryption provider.
+
+
+#### 1.2.34
+Ensure that encryption providers are appropriately configured (Scored)
+Rationale +Where `etcd` encryption is used, it is important to ensure that the appropriate set of encryption providers is used. Currently, the `aescbc`, `kms` and `secretbox` are likely to be appropriate options. +
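+
+A minimal sketch of such a file, using the `aescbc` provider (the path and the key value are placeholders, not K3s defaults):
+
+```bash
+cat <<'EOF' > /path/to/encryption-config.yaml
+apiVersion: apiserver.config.k8s.io/v1
+kind: EncryptionConfiguration
+resources:
+  - resources:
+      - secrets
+    providers:
+      - aescbc:
+          keys:
+            - name: key1
+              secret: <base64-encoded 32-byte key>
+      - identity: {}
+EOF
+```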
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+grep aescbc /path/to/encryption-config.json
+```
+
+Verify that aescbc is set as the encryption provider for all the desired resources.
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an `EncryptionConfig` file.
+In this file, choose **aescbc**, **kms** or **secretbox** as the encryption provider.
+K3s server needs to be run with `--secrets-encryption=true`, and one of the allowed encryption providers must be present in the file.
+
+
+#### 1.2.35
+Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Not Scored)
+
+Rationale +TLS ciphers have had a number of known vulnerabilities and weaknesses, which can reduce the protection provided by them. By default Kubernetes supports a number of TLS cipher suites including some that have security concerns, weakening the protection provided. +
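+
+If an operator does want to restrict the cipher suites, a sketch of the flag is shown below; the suite list is only an example of strong choices, not a K3s default:
+
+```bash
+k3s server --kube-apiserver-arg='tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384'
+```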
+ +**Result:** **Not Scored - Operator Dependent** + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "tls-cipher-suites" +``` + +Verify that the `--tls-cipher-suites` argument is set as outlined in the remediation procedure below. + +**Remediation:** +By default, K3s explicitly doesn't set this flag. No manual remediation needed. + + +### 1.3 Controller Manager + +#### 1.3.1 +Ensure that the `--terminated-pod-gc-threshold` argument is set as appropriate (Not Scored) +
+Rationale
+Garbage collection is important to ensure sufficient resource availability and to avoid degraded performance and availability. In the worst case, the system might crash or just be unusable for a long period of time. The current setting for garbage collection is 12,500 terminated pods which might be too high for your system to sustain. Based on your system resources and tests, choose an appropriate threshold value to activate garbage collection.
+
+
+**Result:** **Not Scored - Operator Dependent**
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "terminated-pod-gc-threshold"
+```
+
+Verify that the `--terminated-pod-gc-threshold` argument is set as appropriate.
+
+**Remediation:**
+K3s server needs to be run with the following, `--kube-controller-manager-arg='terminated-pod-gc-threshold=10'`.
+
+
+#### 1.3.2
+Ensure that the `--profiling` argument is set to false (Scored)
+Rationale +Profiling allows for the identification of specific performance bottlenecks. It generates a significant amount of program data that could potentially be exploited to uncover system and program details. If you are not experiencing any bottlenecks and do not need the profiler for troubleshooting purposes, it is recommended to turn it off to reduce the potential attack surface. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "profiling" +``` + +Verify that the `--profiling` argument is set to false. + +**Remediation:** +By default, K3s sets the `--profiling` flag parameter to false. No manual remediation needed. + + +#### 1.3.3 +Ensure that the `--use-service-account-credentials` argument is set to `true` (Scored) +
+Rationale +The controller manager creates a service account per controller in the `kube-system` namespace, generates a credential for it, and builds a dedicated API client with that service account credential for each controller loop to use. Setting the `--use-service-account-credentials` to `true` runs each control loop within the controller manager using a separate service account credential. When used in combination with RBAC, this ensures that the control loops run with the minimum permissions required to perform their intended tasks. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "use-service-account-credentials" +``` + +Verify that the `--use-service-account-credentials` argument is set to true. + +**Remediation:** +K3s server needs to be run with the following, `--kube-controller-manager-arg='use-service-account-credentials=true'` + + +#### 1.3.4 +Ensure that the `--service-account-private-key-file` argument is set as appropriate (Scored) +
+Rationale +To ensure that keys for service account tokens can be rotated as needed, a separate public/private key pair should be used for signing service account tokens. The private key should be specified to the controller manager with `--service-account-private-key-file` as appropriate. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "service-account-private-key-file" +``` + +Verify that the `--service-account-private-key-file` argument is set as appropriate. + +**Remediation:** +By default, K3s sets the `--service-account-private-key-file` argument with the service account key file. No manual remediation needed. + + +#### 1.3.5 +Ensure that the `--root-ca-file` argument is set as appropriate (Scored) +
+Rationale +Processes running within pods that need to contact the API server must verify the API server's serving certificate. Failing to do so could be a subject to man-in-the-middle attacks. + +Providing the root certificate for the API server's serving certificate to the controller manager with the `--root-ca-file` argument allows the controller manager to inject the trusted bundle into pods so that they can verify TLS connections to the API server. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "root-ca-file" +``` + +Verify that the `--root-ca-file` argument exists and is set to a certificate bundle file containing the root certificate for the API server's serving certificate + +**Remediation:** +By default, K3s sets the `--root-ca-file` argument with the root ca file. No manual remediation needed. + + +#### 1.3.6 +Ensure that the `RotateKubeletServerCertificate` argument is set to `true` (Scored) +
+Rationale +`RotateKubeletServerCertificate` causes the kubelet to both request a serving certificate after bootstrapping its client credentials and rotate the certificate as its existing credentials expire. This automated periodic rotation ensures that there are no downtimes due to expired certificates and thus addressing availability in the CIA security triad. + +Note: This recommendation only applies if you let kubelets get their certificates from the API server. In case your kubelet certificates come from an outside authority/tool (e.g. Vault) then you need to take care of rotation yourself. +
+
+**Result:** Not Applicable
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "RotateKubeletServerCertificate"
+```
+
+Verify that the `RotateKubeletServerCertificate` argument exists and is set to true.
+
+**Remediation:**
+By default, K3s implements its own logic for certificate generation and rotation.
+
+
+#### 1.3.7
+Ensure that the `--bind-address` argument is set to `127.0.0.1` (Scored)
+Rationale +The Controller Manager API service which runs on port 10252/TCP by default is used for health and metrics information and is available without authentication or encryption. As such it should only be bound to a localhost interface, to minimize the cluster's attack surface. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "bind-address" +``` + +Verify that the `--bind-address` argument is set to 127.0.0.1. + +**Remediation:** +By default, K3s sets the `--bind-address` argument to `127.0.0.1`. No manual remediation needed. + + +### 1.4 Scheduler +This section contains recommendations relating to Scheduler configuration flags + + +#### 1.4.1 +Ensure that the `--profiling` argument is set to `false` (Scored) +
+Rationale +Profiling allows for the identification of specific performance bottlenecks. It generates a significant amount of program data that could potentially be exploited to uncover system and program details. If you are not experiencing any bottlenecks and do not need the profiler for troubleshooting purposes, it is recommended to turn it off to reduce the potential attack surface. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-scheduler" | tail -n1 | grep "profiling" +``` + +Verify that the `--profiling` argument is set to false. + +**Remediation:** +By default, K3s sets the `--profiling` flag parameter to false. No manual remediation needed. + + +#### 1.4.2 +Ensure that the `--bind-address` argument is set to `127.0.0.1` (Scored) +
+Rationale + +The Scheduler API service which runs on port 10251/TCP by default is used for health and metrics information and is available without authentication or encryption. As such it should only be bound to a localhost interface, to minimize the cluster's attack surface. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-scheduler" | tail -n1 | grep "bind-address" +``` + +Verify that the `--bind-address` argument is set to 127.0.0.1. + +**Remediation:** +By default, K3s sets the `--bind-address` argument to `127.0.0.1`. No manual remediation needed. + + +## 2 Etcd Node Configuration +This section covers recommendations for etcd configuration. + +#### 2.1 +Ensure that the `cert-file` and `key-file` fields are set as appropriate (Scored) +
+Rationale +etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be encrypted in transit. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +grep -E 'cert-file|key-file' /var/lib/rancher/k3s/server/db/etcd/config +``` + +Verify that the `cert-file` and the `key-file` fields are set as appropriate. + +**Remediation:** +By default, K3s uses a config file for etcd that can be found at `/var/lib/rancher/k3s/server/db/etcd/config`. Server and peer cert and key files are specified. No manual remediation needed. + + +#### 2.2 +Ensure that the `client-cert-auth` field is set to `true` (Scored) +
+Rationale +etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should not be available to unauthenticated clients. You should enable the client authentication via valid certificates to secure the access to the etcd service. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +grep 'client-cert-auth' /var/lib/rancher/k3s/server/db/etcd/config +``` + +Verify that the `client-cert-auth` field is set to true. + +**Remediation:** +By default, K3s uses a config file for etcd that can be found at `/var/lib/rancher/k3s/server/db/etcd/config`. `client-cert-auth` is set to true. No manual remediation needed. + + +#### 2.3 +Ensure that the `auto-tls` field is not set to `true` (Scored) +
+Rationale +etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should not be available to unauthenticated clients. You should enable the client authentication via valid certificates to secure the access to the etcd service. +
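+
+A quick manual check (a sketch, mirroring the audits of the neighboring etcd recommendations):
+
+```bash
+grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config
+```
+
+No `auto-tls` entry should be returned.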
+
+**Result:** Pass
+
+**Remediation:**
+By default, K3s starts Etcd without this flag, and it defaults to `false`. No manual remediation is needed.
+
+
+#### 2.4
+Ensure that the `peer-cert-file` and `peer-key-file` fields are set as appropriate (Scored)
+Rationale +etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be encrypted in transit and also amongst peers in the etcd clusters. +
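+
+A quick manual check (a sketch; the config path matches the remediation below):
+
+```bash
+grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config
+```
+
+Verify that the output includes the peer cert and peer key file fields.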
+
+**Result:** Pass
+
+**Remediation:**
+By default, K3s starts Etcd with a config file located at `/var/lib/rancher/k3s/server/db/etcd/config`. The config file contains a `peer-transport-security:` section with fields for the peer cert and peer key files.
+
+
+#### 2.5
+Ensure that the `client-cert-auth` field is set to `true` (Scored)
+Rationale +etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be accessible only by authenticated etcd peers in the etcd cluster. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +grep 'client-cert-auth' /var/lib/rancher/k3s/server/db/etcd/config +``` + +Verify that the `client-cert-auth` field in the peer section is set to true. + +**Remediation:** +By default, K3s uses a config file for etcd that can be found at `/var/lib/rancher/k3s/server/db/etcd/config`. Within the file, the `client-cert-auth` field is set. No manual remediation needed. + + +#### 2.6 +Ensure that the `peer-auto-tls` field is not set to `true` (Scored) +
+Rationale
+etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be accessible only by authenticated etcd peers in the etcd cluster. Hence, do not use self-signed certificates for authentication.
+
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config
+```
+
+Verify that the `peer-auto-tls` field does not exist.
+
+**Remediation:**
+By default, K3s uses a config file for etcd that can be found at `/var/lib/rancher/k3s/server/db/etcd/config`. The file does not contain the `peer-auto-tls` field. No manual remediation needed.
+
+
+#### 2.7
+Ensure that a unique Certificate Authority is used for etcd (Not Scored)
+Rationale +etcd is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. Its access should be restricted to specifically designated clients and peers only. + +Authentication to etcd is based on whether the certificate presented was issued by a trusted certificate authority. There is no checking of certificate attributes such as common name or subject alternative name. As such, if any attackers were able to gain access to any certificate issued by the trusted certificate authority, they would be able to gain full access to the etcd database. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +# To find the ca file used by etcd: +grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config +# To find the kube-apiserver process: +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 +``` + +Verify that the file referenced by the `client-ca-file` flag in the apiserver process is different from the file referenced by the `trusted-ca-file` parameter in the etcd configuration file. + +**Remediation:** +By default, K3s uses a config file for etcd that can be found at `/var/lib/rancher/k3s/server/db/etcd/config` and the `trusted-ca-file` parameters in it are set to unique values specific to etcd. No manual remediation needed. + + + +## 3 Control Plane Configuration + + +### 3.1 Authentication and Authorization + + +#### 3.1.1 +Client certificate authentication should not be used for users (Not Scored) +
+Rationale +With any authentication mechanism the ability to revoke credentials if they are compromised or no longer required, is a key control. Kubernetes client certificate authentication does not allow for this due to a lack of support for certificate revocation. +
+ +**Result:** Not Scored - Operator Dependent + +**Audit:** +Review user access to the cluster and ensure that users are not making use of Kubernetes client certificate authentication. + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented in place of client certificates. + +### 3.2 Logging + + +#### 3.2.1 +Ensure that a minimal audit policy is created (Scored) +
+Rationale +Logging is an important detective control for all systems, to detect potential unauthorized access. +
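+
+A minimal sketch of an audit policy and the corresponding K3s flags; the policy path and the log-everything-at-Metadata rule are illustrative, not recommendations:
+
+```bash
+cat <<'EOF' > /var/lib/rancher/k3s/server/audit.yaml
+apiVersion: audit.k8s.io/v1
+kind: Policy
+rules:
+  - level: Metadata
+EOF
+k3s server \
+  --kube-apiserver-arg='audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml' \
+  --kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit-log'
+```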
+
+**Result:** Does not pass. See the [Hardening Guide](../hardening_guide/) for details.
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "audit-policy-file"
+```
+
+Verify that the `--audit-policy-file` is set. Review the contents of the file specified and ensure that it contains a valid audit policy.
+
+**Remediation:**
+Create an audit policy file for your cluster and pass it to k3s, e.g. `--kube-apiserver-arg='audit-policy-file=/path/to/audit/policy/file'`.
+
+
+#### 3.2.2
+Ensure that the audit policy covers key security concerns (Not Scored)
+Rationale +Security audit logs should cover access and modification of key resources in the cluster, to enable them to form an effective part of a security environment. +
+
+**Result:** Not Scored - Operator Dependent
+
+**Remediation:**
+Review the audit policy in use on the cluster and ensure that it covers, at a minimum, access to and modification of key resources such as Secrets, so that the logs can form an effective part of a security environment.
+
+
+## 4 Worker Node Security Configuration
+
+
+### 4.1 Worker Node Configuration Files
+
+
+#### 4.1.1
+Ensure that the kubelet service file permissions are set to `644` or more restrictive (Scored)
+Rationale +The `kubelet` service file controls various parameters that set the behavior of the kubelet service in the worker node. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. +
+ +**Result:** Not Applicable + +**Remediation:** +K3s doesn’t launch the kubelet as a service. It is launched and managed by the K3s supervisor process. All configuration is passed to it as command line arguments at run time. + + +#### 4.1.2 +Ensure that the kubelet service file ownership is set to `root:root` (Scored) +
+Rationale +The `kubelet` service file controls various parameters that set the behavior of the kubelet service in the worker node. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`. +
+ +**Result:** Not Applicable + +**Remediation:** +K3s doesn’t launch the kubelet as a service. It is launched and managed by the K3s supervisor process. All configuration is passed to it as command line arguments at run time. + + +#### 4.1.3 +Ensure that the proxy kubeconfig file permissions are set to `644` or more restrictive (Scored) +
+Rationale +The `kube-proxy` kubeconfig file controls various parameters of the `kube-proxy` service in the worker node. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. + +It is possible to run `kube-proxy` with the kubeconfig parameters configured as a Kubernetes ConfigMap instead of a file. In this case, there is no proxy kubeconfig file. +
+ +**Result:** Not Applicable + +**Audit:** +Run the below command on the worker node. + +```bash +stat -c %a /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig +644 +``` + +Verify that if a file is specified and it exists, the permissions are 644 or more restrictive. + +**Remediation:** +K3s runs `kube-proxy` in process and does not use a config file. + + +#### 4.1.4 +Ensure that the proxy kubeconfig file ownership is set to `root:root` (Scored) +
+Rationale +The kubeconfig file for `kube-proxy` controls various parameters for the `kube-proxy` service in the worker node. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`. +
+
+**Result:** Not Applicable
+
+**Audit:**
+Run the below command on the worker node.
+
+```bash
+stat -c %U:%G /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
+root:root
+```
+
+Verify that if a file is specified and it exists, the ownership is set to `root:root`.
+
+**Remediation:**
+K3s runs `kube-proxy` in process and does not use a config file.
+
+
+#### 4.1.5
+Ensure that the kubelet.conf file permissions are set to `644` or more restrictive (Scored)
+Rationale +The `kubelet.conf` file is the kubeconfig file for the node, and controls various parameters that set the behavior and identity of the worker node. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the worker node. + +```bash +stat -c %a /var/lib/rancher/k3s/agent/kubelet.kubeconfig +644 +``` + +**Remediation:** +By default, K3s creates `kubelet.kubeconfig` with `644` permissions. No manual remediation needed. + +#### 4.1.6 +Ensure that the kubelet.conf file ownership is set to `root:root` (Scored) +
+Rationale +The `kubelet.conf` file is the kubeconfig file for the node, and controls various parameters that set the behavior and identity of the worker node. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`. +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the worker node.
+
+```bash
+stat -c %U:%G /var/lib/rancher/k3s/agent/kubelet.kubeconfig
+root:root
+```
+
+**Remediation:**
+By default, K3s creates `kubelet.kubeconfig` with `root:root` ownership. No manual remediation needed.
+
+
+#### 4.1.7
+Ensure that the certificate authorities file permissions are set to `644` or more restrictive (Scored)
+Rationale +The certificate authorities file controls the authorities used to validate API requests. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +stat -c %a /var/lib/rancher/k3s/server/tls/server-ca.crt +644 +``` + +Verify that the permissions are 644. + +**Remediation:** +By default, K3s creates `/var/lib/rancher/k3s/server/tls/server-ca.crt` with `644` permissions. + + +#### 4.1.8 +Ensure that the client certificate authorities file ownership is set to `root:root` (Scored) +
+Rationale +The certificate authorities file controls the authorities used to validate API requests. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +stat -c %U:%G /var/lib/rancher/k3s/server/tls/client-ca.crt +root:root +``` + +**Remediation:** +By default, K3s creates `/var/lib/rancher/k3s/server/tls/client-ca.crt` with `root:root` ownership. + + +#### 4.1.9 +Ensure that the kubelet configuration file has permissions set to `644` or more restrictive (Scored) +
+Rationale +The kubelet reads various parameters, including security settings, from a config file specified by the `--config` argument. If this file is specified you should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. +
+ +**Result:** Not Applicable + +**Remediation:** +K3s doesn’t require or maintain a configuration file for the kubelet process. All configuration is passed to it as command line arguments at run time. + + +#### 4.1.10 +Ensure that the kubelet configuration file ownership is set to `root:root` (Scored) +
+Rationale
+The kubelet reads various parameters, including security settings, from a config file specified by the `--config` argument. If this file is specified, you should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`.
+
+ +**Result:** Not Applicable + +**Remediation:** +K3s doesn’t require or maintain a configuration file for the kubelet process. All configuration is passed to it as command line arguments at run time. + + +### 4.2 Kubelet +This section contains recommendations for kubelet configuration. + + +#### 4.2.1 +Ensure that the `--anonymous-auth` argument is set to false (Scored) +
+Rationale +When enabled, requests that are not rejected by other configured authentication methods are treated as anonymous requests. These requests are then served by the Kubelet server. You should rely on authentication to authorize access and disallow anonymous requests. +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep "anonymous-auth"
+```
+
+Verify that the value for `--anonymous-auth` is false.
+
+**Remediation:**
+By default, K3s starts kubelet with `--anonymous-auth` set to false. No manual remediation needed.
+
+#### 4.2.2
+Ensure that the `--authorization-mode` argument is not set to `AlwaysAllow` (Scored)
+Rationale +Kubelets, by default, allow all authenticated requests (even anonymous ones) without needing explicit authorization checks from the apiserver. You should restrict this behavior and only allow explicitly authorized requests. +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep "authorization-mode"
+```
+
+Verify that `AlwaysAllow` is not present.
+
+**Remediation:**
+K3s starts kubelet with `Webhook` as the value for the `--authorization-mode` argument. No manual remediation needed.
+
+
+#### 4.2.3
+Ensure that the `--client-ca-file` argument is set as appropriate (Scored)
+Rationale +The connections from the apiserver to the kubelet are used for fetching logs for pods, attaching (through kubectl) to running pods, and using the kubelet’s port-forwarding functionality. These connections terminate at the kubelet’s HTTPS endpoint. By default, the apiserver does not verify the kubelet’s serving certificate, which makes the connection subject to man-in-the-middle attacks, and unsafe to run over untrusted and/or public networks. Enabling Kubelet certificate authentication ensures that the apiserver could authenticate the Kubelet before submitting any requests. +
+

**Result:** Pass

**Audit:**
Run the below command on the master node.

```bash
journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep "client-ca-file"
```

Verify that the `--client-ca-file` argument is set to a valid CA file.

**Remediation:**
By default, K3s starts the kubelet process with the `--client-ca-file` argument set. No manual remediation needed.


#### 4.2.4
Ensure that the `--read-only-port` argument is set to `0` (Scored)
+Rationale +The Kubelet process provides a read-only API in addition to the main Kubelet API. Unauthenticated access is provided to this read-only API which could possibly retrieve potentially sensitive information about the cluster. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep "read-only-port" +``` +Verify that the `--read-only-port` argument is set to 0. + +**Remediation:** +By default, K3s starts the kubelet process with the `--read-only-port` argument set to `0`. + + +#### 4.2.5 +Ensure that the `--streaming-connection-idle-timeout` argument is not set to `0` (Scored) +
+Rationale +Setting idle timeouts ensures that you are protected against Denial-of-Service attacks, inactive connections and running out of ephemeral ports. + +**Note:** By default, `--streaming-connection-idle-timeout` is set to 4 hours which might be too high for your environment. Setting this as appropriate would additionally ensure that such streaming connections are timed out after serving legitimate use cases. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep "streaming-connection-idle-timeout" +``` + +Verify that there's nothing returned. + +**Remediation:** +By default, K3s does not set `--streaming-connection-idle-timeout` when starting kubelet. + + +#### 4.2.6 +Ensure that the `--protect-kernel-defaults` argument is set to `true` (Scored) +
+Rationale +Kernel parameters are usually tuned and hardened by the system administrators before putting the systems into production. These parameters protect the kernel and the system. Your kubelet kernel defaults that rely on such parameters should be appropriately set to match the desired secured system state. Ignoring this could potentially lead to running pods with undesired kernel behavior. +
+

**Result:** Pass

**Audit:**
Run the below command on the master node.

```bash
journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep "protect-kernel-defaults"
```

Verify that the value for `--protect-kernel-defaults` is true.

**Remediation:**
The K3s server needs to be started with `--protect-kernel-defaults=true`.
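
As an illustrative sketch (the exact invocation depends on how K3s was installed, for example directly or via a systemd unit):

```bash
# Start the K3s server with kernel defaults protected.
# If K3s runs under systemd, add this flag to the ExecStart line of the
# k3s.service unit instead, then reload and restart the service.
k3s server --protect-kernel-defaults=true
```


#### 4.2.7
Ensure that the `--make-iptables-util-chains` argument is set to `true` (Scored)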
+Rationale +Kubelets can automatically manage the required changes to iptables based on how you choose your networking options for the pods. It is recommended to let kubelets manage the changes to iptables. This ensures that the iptables configuration remains in sync with pods networking configuration. Manually configuring iptables with dynamic pod network configuration changes might hamper the communication between pods/containers and to the outside world. You might have iptables rules too restrictive or too open. +
+

**Result:** Pass

**Audit:**
Run the below command on the master node.

```bash
journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep "make-iptables-util-chains"
```

Verify that the argument is either absent (the kubelet defaults to `true`) or explicitly set to `true`.

**Remediation:**
The K3s server needs to be run with `--kubelet-arg='make-iptables-util-chains=true'`.
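
For example (a sketch; adjust to your install method):

```bash
# Pass the kubelet flag through the K3s server so the kubelet manages iptables chains.
k3s server --kubelet-arg='make-iptables-util-chains=true'
```


#### 4.2.8
Ensure that the `--hostname-override` argument is not set (Not Scored)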
+Rationale +Overriding hostnames could potentially break TLS setup between the kubelet and the apiserver. Additionally, with overridden hostnames, it becomes increasingly difficult to associate logs with a particular node and process them for security analytics. Hence, you should setup your kubelet nodes with resolvable FQDNs and avoid overriding the hostnames with IPs. +
+ +**Result:** Not Applicable + +**Remediation:** +K3s does set this parameter for each host, but K3s also manages all certificates in the cluster. It ensures the hostname-override is included as a subject alternative name (SAN) in the kubelet's certificate. + + +#### 4.2.9 +Ensure that the `--event-qps` argument is set to 0 or a level which ensures appropriate event capture (Not Scored) +
+Rationale +It is important to capture all events and not restrict event creation. Events are an important source of security information and analytics that ensure that your environment is consistently monitored using the event data. +
+

**Result:** Not Scored - Operator Dependent

**Remediation:**
See the CIS Benchmark guide for further details on configuring this.
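
To see what, if anything, is currently configured, an optional check in the style of the other audits in this guide:

```bash
# No output means the kubelet is running with its default event-qps value.
journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep "event-qps"
```

#### 4.2.10
Ensure that the `--tls-cert-file` and `--tls-private-key-file` arguments are set as appropriate (Scored)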
+Rationale +Kubelet communication contains sensitive parameters that should remain encrypted in transit. Configure the Kubelets to serve only HTTPS traffic. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep -E 'tls-cert-file|tls-private-key-file' +``` + +Verify the `--tls-cert-file` and `--tls-private-key-file` arguments are present and set appropriately. + +**Remediation:** +By default, K3s sets the `--tls-cert-file` and `--tls-private-key-file` arguments when executing the kubelet process. + + +#### 4.2.11 +Ensure that the `--rotate-certificates` argument is not set to `false` (Scored) +
+
Rationale

The `--rotate-certificates` setting causes the kubelet to rotate its client certificates by creating new CSRs as its existing credentials expire. This automated periodic rotation ensures that there is no downtime due to expired certificates, thus addressing availability in the CIA security triad.

**Note:** This recommendation only applies if you let kubelets get their certificates from the API server. In case your kubelet certificates come from an outside authority/tool (e.g. Vault) then you need to take care of rotation yourself.

**Note:** This feature also requires the `RotateKubeletClientCertificate` feature gate to be enabled (which is the default since Kubernetes v1.7).
+ +**Result:** Not Applicable + +**Remediation:** +By default, K3s implements its own logic for certificate generation and rotation. + + +#### 4.2.12 +Ensure that the `RotateKubeletServerCertificate` argument is set to `true` (Scored) +
+
Rationale
`RotateKubeletServerCertificate` causes the kubelet to both request a serving certificate after bootstrapping its client credentials and rotate the certificate as its existing credentials expire. This automated periodic rotation ensures that there is no downtime due to expired certificates, thus addressing availability in the CIA security triad.

**Note:** This recommendation only applies if you let kubelets get their certificates from the API server. In case your kubelet certificates come from an outside authority/tool (e.g. Vault) then you need to take care of rotation yourself.
+ +**Result:** Not Applicable + +**Remediation:** +By default, K3s implements its own logic for certificate generation and rotation. + + +#### 4.2.13 +Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Not Scored) +
+Rationale +TLS ciphers have had a number of known vulnerabilities and weaknesses, which can reduce the protection provided by them. By default Kubernetes supports a number of TLS ciphersuites including some that have security concerns, weakening the protection provided. +
+ +**Result:** Not Scored - Operator Dependent + +**Remediation:** +Configuration of the parameter is dependent on your use case. Please see the CIS Kubernetes Benchmark for suggestions on configuring this for your use-case. + + +## 5 Kubernetes Policies + + +### 5.1 RBAC and Service Accounts + + +#### 5.1.1 +Ensure that the cluster-admin role is only used where required (Not Scored) +
+Rationale +Kubernetes provides a set of default roles where RBAC is used. Some of these roles such as `cluster-admin` provide wide-ranging privileges which should only be applied where absolutely necessary. Roles such as `cluster-admin` allow super-user access to perform any action on any resource. When used in a `ClusterRoleBinding`, it gives full control over every resource in the cluster and in all namespaces. When used in a `RoleBinding`, it gives full control over every resource in the rolebinding's namespace, including the namespace itself. +
+

**Result:** Pass

**Remediation:**
K3s does not make inappropriate use of the cluster-admin role. Operators must audit their own workloads for additional usage. See the CIS Benchmark guide for more details.
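
As a starting point for that audit, one way to list the subjects bound to `cluster-admin` (a sketch; review each binding for necessity):

```bash
# List ClusterRoleBindings that grant the cluster-admin role.
kubectl get clusterrolebindings -o json \
  | jq -r '.items[] | select(.roleRef.name == "cluster-admin") | .metadata.name'
```

#### 5.1.2
Minimize access to secrets (Not Scored)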
+
Rationale
Inappropriate access to secrets stored within the Kubernetes cluster can allow an attacker to gain additional access to the Kubernetes cluster or to external resources whose credentials are stored as secrets.
+

**Result:** Not Scored - Operator Dependent

**Remediation:**
K3s limits its use of secrets for the system components appropriately, but operators must audit the use of secrets by their workloads. See the CIS Benchmark guide for more details.
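
One way to begin that audit is to list the Roles and ClusterRoles whose rules reference the `secrets` resource (a sketch; the output still needs manual review):

```bash
# List (Cluster)Roles that grant some level of access to secrets.
kubectl get roles,clusterroles --all-namespaces -o json \
  | jq -r '.items[] | select(.rules[]?.resources[]? == "secrets") | "\(.kind) \(.metadata.name)"' \
  | sort -u
```

#### 5.1.3
Minimize wildcard use in Roles and ClusterRoles (Not Scored)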
+Rationale +The principle of least privilege recommends that users are provided only the access required for their role and nothing more. The use of wildcard rights grants is likely to provide excessive rights to the Kubernetes API. +
+

**Result:** Not Scored - Operator Dependent

**Audit:**
Run the below command on the master node.

```bash
# Retrieve the roles defined across each namespace in the cluster and review for wildcards
kubectl get roles --all-namespaces -o yaml

# Retrieve the cluster roles defined in the cluster and review for wildcards
kubectl get clusterroles -o yaml
```

Verify that there are no wildcards in use.

**Remediation:**
Operators should review their workloads for proper role usage. See the CIS Benchmark guide for more details.

#### 5.1.4
Minimize access to create pods (Not Scored)
+Rationale +The ability to create pods in a cluster opens up possibilities for privilege escalation and should be restricted, where possible. +
+

**Result:** Not Scored - Operator Dependent

**Remediation:**
Operators should review who has access to create pods in their cluster. See the CIS Benchmark guide for more details.
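
A quick spot check for a given user or service account (a sketch; substitute the subjects you actually want to test):

```bash
# Each command returns "yes" or "no" depending on whether the subject may create pods.
kubectl auth can-i create pods --as=jane
kubectl auth can-i create pods --as=system:serviceaccount:default:default
```

#### 5.1.5
Ensure that default service accounts are not actively used. (Scored)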
+Rationale +Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. + +Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. + +The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments. +
+

**Result:** Fail. This control currently requires operator intervention. See the [Hardening Guide](../hardening_guide/) for details.

**Audit:**
For each namespace in the cluster, review the rights assigned to the default service account and ensure that it has no roles or cluster roles bound to it apart from the defaults. Additionally, ensure that the `automountServiceAccountToken: false` setting is in place for each default service account.

**Remediation:**
Create explicit service accounts wherever a Kubernetes workload requires specific access to the Kubernetes API server. Modify the configuration of each default service account to include this value:

```yaml
automountServiceAccountToken: false
```
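
One way to apply that setting across namespaces (a sketch; review the namespace list before running it):

```bash
# Disable token automounting on the default service account in every namespace.
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl patch serviceaccount default -n "$ns" \
    -p '{"automountServiceAccountToken": false}'
done
```


#### 5.1.6
Ensure that Service Account Tokens are only mounted where necessary (Not Scored)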
+Rationale +Mounting service account tokens inside pods can provide an avenue for privilege escalation attacks where an attacker is able to compromise a single pod in the cluster. + +Avoiding mounting these tokens removes this attack avenue. +
+

**Result:** Not Scored - Operator Dependent

**Remediation:**
The pods launched by K3s are part of the control plane and generally need access to communicate with the API server, so this control does not apply to them. Operators should review their workloads and take steps to modify the definition of pods and service accounts which do not need to mount service account tokens to disable it.

### 5.2 Pod Security Policies


#### 5.2.1
Minimize the admission of privileged containers (Scored)
+Rationale +Privileged containers have access to all Linux Kernel capabilities and devices. A container running with full privileges can do almost everything that the host can do. This flag exists to allow special use-cases, like manipulating the network stack and accessing devices. + +There should be at least one PodSecurityPolicy (PSP) defined which does not permit privileged containers. + +If you need to run privileged containers, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP. +
+

**Result:** Pass

**Audit:**
Run the below command on the master node.

```bash
kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.privileged == null) or (.spec.privileged == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
```

Verify that the returned count is 1.

**Remediation:**
An operator should apply a PodSecurityPolicy that sets the `privileged` value to false explicitly for the PSP it creates. An example of this can be found in the [Hardening Guide](../hardening_guide/).


#### 5.2.2
Minimize the admission of containers wishing to share the host process ID namespace (Scored)
+Rationale +A container running in the host's PID namespace can inspect processes running outside the container. If the container also has access to ptrace capabilities this can be used to escalate privileges outside of the container. + +There should be at least one PodSecurityPolicy (PSP) defined which does not permit containers to share the host PID namespace. + +If you need to run containers which require hostPID, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostPID == null) or (.spec.hostPID == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' +``` + +Verify that the returned count is 1. + +**Remediation:** +An operator should apply a PodSecurityPolicy that sets the `hostPID` value to false explicitly for the PSP it creates. An example of this can be found in the [Hardening Guide](../hardening_guide/). + + +#### 5.2.3 +Minimize the admission of containers wishing to share the host IPC namespace (Scored) +
+
Rationale

A container running in the host's IPC namespace can use IPC to interact with processes outside the container.

There should be at least one PodSecurityPolicy (PSP) defined which does not permit containers to share the host IPC namespace.

If you need to run containers which require hostIPC, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP.
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostIPC == null) or (.spec.hostIPC == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' +``` + +Verify that the returned count is 1. + +**Remediation:** +An operator should apply a PodSecurityPolicy that sets the `HostIPC` value to false explicitly for the PSP it creates. An example of this can be found in the [Hardening Guide](../hardening_guide/). + + +#### 5.2.4 +Minimize the admission of containers wishing to share the host network namespace (Scored) +
+
Rationale
A container running in the host's network namespace could access the local loopback device, and could access network traffic to and from other pods.

There should be at least one PodSecurityPolicy (PSP) defined which does not permit containers to share the host network namespace.

If you need to run containers which require hostNetwork, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP.
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostNetwork == null) or (.spec.hostNetwork == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' +``` + +Verify that the returned count is 1. + +**Remediation:** +An operator should apply a PodSecurityPolicy that sets the `HostNetwork` value to false explicitly for the PSP it creates. An example of this can be found in the [Hardening Guide](../hardening_guide/). + + +#### 5.2.5 +Minimize the admission of containers with `allowPrivilegeEscalation` (Scored) +
+
Rationale
A container running with the `allowPrivilegeEscalation` flag set to true may have processes that can gain more privileges than their parent.

There should be at least one PodSecurityPolicy (PSP) defined which does not permit containers to allow privilege escalation. The option exists (and is defaulted to true) to permit setuid binaries to run.

If you need to run containers which use setuid binaries or require privilege escalation, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP.
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' +``` + +Verify that the returned count is 1. + +**Remediation:** +An operator should apply a PodSecurityPolicy that sets the `allowPrivilegeEscalation` value to false explicitly for the PSP it creates. An example of this can be found in the [Hardening Guide](../hardening_guide/). + + +#### 5.2.6 +Minimize the admission of root containers (Not Scored) +
+
Rationale
Containers may run as any Linux user. Containers which run as the root user, whilst constrained by Container Runtime security features, still have an escalated likelihood of container breakout.

Ideally, all containers should run as a defined non-UID 0 user.

There should be at least one PodSecurityPolicy (PSP) defined which does not permit root users in a container.

If you need to run root containers, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP.
+

**Result:** Not Scored

**Audit:**
Run the below command on the master node.

```bash
kubectl describe psp | grep MustRunAsNonRoot
```

Verify that the result is `Rule: MustRunAsNonRoot`.

**Remediation:**
An operator should apply a PodSecurityPolicy that sets the `runAsUser.Rule` value to `MustRunAsNonRoot`. An example of this can be found in the [Hardening Guide](../hardening_guide/).


#### 5.2.7
Minimize the admission of containers with the NET_RAW capability (Not Scored)
+Rationale +Containers run with a default set of capabilities as assigned by the Container Runtime. By default this can include potentially dangerous capabilities. With Docker as the container runtime the NET_RAW capability is enabled which may be misused by malicious containers. + +Ideally, all containers should drop this capability. + +There should be at least one PodSecurityPolicy (PSP) defined which prevents containers with the NET_RAW capability from launching. + +If you need to run containers with this capability, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP. +
+

**Result:** Not Scored

**Audit:**
Run the below command on the master node.

```bash
kubectl get psp -o json | jq .items[].spec.requiredDropCapabilities[]
```

Verify the value is `"ALL"`.

**Remediation:**
An operator should apply a PodSecurityPolicy that sets `.spec.requiredDropCapabilities[]` to a value of `ALL`. An example of this can be found in the [Hardening Guide](../hardening_guide/).


#### 5.2.8
Minimize the admission of containers with added capabilities (Not Scored)
+Rationale +Containers run with a default set of capabilities as assigned by the Container Runtime. Capabilities outside this set can be added to containers which could expose them to risks of container breakout attacks. + +There should be at least one PodSecurityPolicy (PSP) defined which prevents containers with capabilities beyond the default set from launching. + +If you need to run containers with additional capabilities, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP. +
+

**Result:** Not Scored

**Audit:**
Run the below command on the master node.

```bash
kubectl get psp
```

Verify that there are no PSPs present which have `allowedCapabilities` set to anything other than an empty array.

**Remediation:**
An operator should apply a PodSecurityPolicy that sets `allowedCapabilities` to an empty array. An example of this can be found in the [Hardening Guide](../hardening_guide/).


#### 5.2.9
Minimize the admission of containers with capabilities assigned (Not Scored)
+Rationale +Containers run with a default set of capabilities as assigned by the Container Runtime. Capabilities are parts of the rights generally granted on a Linux system to the root user. + +In many cases applications running in containers do not require any capabilities to operate, so from the perspective of the principle of least privilege use of capabilities should be minimized. +
+

**Result:** Not Scored

**Audit:**
Run the below command on the master node.

```bash
kubectl get psp
```

For each PSP, verify that capabilities which are not required by the workloads are dropped via `requiredDropCapabilities`.

**Remediation:**
An operator should apply a PodSecurityPolicy that sets `requiredDropCapabilities` to `ALL`. An example of this can be found in the [Hardening Guide](../hardening_guide/).


### 5.3 Network Policies and CNI


#### 5.3.1
Ensure that the CNI in use supports Network Policies (Not Scored)
+Rationale +Kubernetes network policies are enforced by the CNI plugin in use. As such it is important to ensure that the CNI plugin supports both Ingress and Egress network policies. +
+

**Result:** Pass

**Audit:**
Review the documentation of the CNI plugin in use by the cluster, and confirm that it supports Ingress and Egress network policies.

**Remediation:**
By default, K3s uses Canal (Calico and Flannel) and fully supports network policies.


#### 5.3.2
Ensure that all Namespaces have Network Policies defined (Scored)
+Rationale +Running different applications on the same Kubernetes cluster creates a risk of one compromised application attacking a neighboring application. Network segmentation is important to ensure that containers can communicate only with those they are supposed to. A network policy is a specification of how selections of pods are allowed to communicate with each other and other network endpoints. + +Network Policies are namespace scoped. When a network policy is introduced to a given namespace, all traffic not allowed by the policy is denied. However, if there are no network policies in a namespace all traffic will be allowed into and out of the pods in that namespace. +
+

**Result:** Pass

**Audit:**
Run the below command on the master node.

```bash
for i in kube-system kube-public default; do
    kubectl get networkpolicies -n $i;
done
```

Verify that there are network policies applied to each of the namespaces.

**Remediation:**
An operator should apply NetworkPolicies that prevent unneeded traffic from traversing the network. An example of applying a NetworkPolicy can be found in the [Hardening Guide](../hardening_guide/).
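
For illustration, a minimal default-deny ingress policy for a single namespace (a sketch; adapt the namespace and add explicit allow rules for legitimate traffic):

```bash
# Deny all ingress traffic to pods in the default namespace unless
# another NetworkPolicy explicitly allows it.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF
```

### 5.4 Secrets Management


#### 5.4.1
Prefer using secrets as files over secrets as environment variables (Not Scored)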
+Rationale +It is reasonably common for application code to log out its environment (particularly in the event of an error). This will include any secret values passed in as environment variables, so secrets can easily be exposed to any user or entity who has access to the logs. +
+ +**Result:** Not Scored + +**Audit:** +Run the following command to find references to objects which use environment variables defined from secrets. + +```bash +kubectl get all -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {"\n"}{end}' -A +``` + +**Remediation:** +If possible, rewrite application code to read secrets from mounted secret files, rather than from environment variables. + + +#### 5.4.2 +Consider external secret storage (Not Scored) +
+Rationale +Kubernetes supports secrets as first-class objects, but care needs to be taken to ensure that access to secrets is carefully limited. Using an external secrets provider can ease the management of access to secrets, especially where secrets are used across both Kubernetes and non-Kubernetes environments. +
+ +**Result:** Not Scored + +**Audit:** +Review your secrets management implementation. + +**Remediation:** +Refer to the secrets management options offered by your cloud provider or a third-party secrets management solution. + + +### 5.5 Extensible Admission Control + + +#### 5.5.1 +Configure Image Provenance using ImagePolicyWebhook admission controller (Not Scored) +
+Rationale +Kubernetes supports plugging in provenance rules to accept or reject the images in your deployments. You could configure such rules to ensure that only approved images are deployed in the cluster. +
+

**Result:** Not Scored

**Audit:**
Review the pod definitions in your cluster and verify that image provenance is configured as appropriate.

**Remediation:**
Follow the Kubernetes documentation and set up image provenance.


### 5.6 Omitted
The v1.5.1 Benchmark skips 5.6 and goes from 5.5 to 5.7. We are including it here merely for explanation.


### 5.7 General Policies
These policies relate to general cluster management topics, like namespace best practices and policies applied to pod objects in the cluster.


#### 5.7.1
Create administrative boundaries between resources using namespaces (Not Scored)
+
Rationale
Limiting the scope of user permissions can reduce the impact of mistakes or malicious activities. A Kubernetes namespace allows you to partition created resources into logically named groups. Resources created in one namespace can be hidden from other namespaces. By default, each resource created by a user in a Kubernetes cluster runs in a default namespace, called `default`. You can create additional namespaces and attach resources and users to them. You can use Kubernetes Authorization plugins to create policies that segregate access to namespace resources between different users.
+ +**Result:** Not Scored + +**Audit:** +Run the below command and review the namespaces created in the cluster. + +```bash +kubectl get namespaces +``` + +Ensure that these namespaces are the ones you need and are adequately administered as per your requirements. + +**Remediation:** +Follow the documentation and create namespaces for objects in your deployment as you need them. + + +#### 5.7.2 +Ensure that the seccomp profile is set to `docker/default` in your pod definitions (Not Scored) +
+Rationale +Seccomp (secure computing mode) is used to restrict the set of system calls applications can make, allowing cluster administrators greater control over the security of workloads running in the cluster. Kubernetes disables seccomp profiles by default for historical reasons. You should enable it to ensure that the workloads have restricted actions available within the container. +
+

**Result:** Not Scored

**Audit:**
Review the pod definitions in your cluster. They should include an annotation like the one below:

```yaml
annotations:
  seccomp.security.alpha.kubernetes.io/pod: docker/default
```

**Remediation:**
Review the Kubernetes documentation and, if needed, apply a relevant PodSecurityPolicy.

#### 5.7.3
Apply Security Context to Your Pods and Containers (Not Scored)
+
Rationale
A security context defines the operating system security settings (uid, gid, capabilities, SELinux role, etc.) applied to a container. When designing your containers and pods, make sure that you configure the security context for your pods, containers, and volumes. A security context is a property defined in the deployment YAML. It controls the security parameters that will be assigned to the pod/container/volume. There are two levels of security context: pod-level security context and container-level security context.
+

**Result:** Not Scored

**Audit:**
Review the pod definitions in your cluster and verify that you have security contexts defined as appropriate.

**Remediation:**
Follow the Kubernetes documentation and apply security contexts to your pods. For a suggested list of security contexts, you may refer to the CIS Security Benchmark.
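
For illustration, a pod spec that sets both a pod-level and a container-level security context (a sketch; tune the values to your workload):

```bash
# Apply a pod that must run as a non-root user and cannot escalate privileges.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
EOF
```


#### 5.7.4
The default namespace should not be used (Scored)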
+Rationale +Resources in a Kubernetes cluster should be segregated by namespace, to allow for security controls to be applied at that level and to make it easier to manage resources. +
+

**Result:** Pass

**Audit:**
Run the below command on the master node.

```bash
kubectl get all -n default
```

The only entries there should be system-managed resources such as the `kubernetes` service.

**Remediation:**
By default, K3s does not utilize the default namespace.