From 6cbc1a670dc44c8feb6fc1f564cd230c4426f361 Mon Sep 17 00:00:00 2001 From: Kourosh Maneshni <39573922+kourosh7@users.noreply.github.com> Date: Thu, 9 Mar 2023 13:46:30 -0800 Subject: [PATCH 01/22] Update workload-ingress.md Update section "1. Deploying a Workload", step #7, to say "Container Image" as shown in the UI. --- .../quick-start-guides/deploy-workloads/workload-ingress.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/getting-started/quick-start-guides/deploy-workloads/workload-ingress.md b/docs/getting-started/quick-start-guides/deploy-workloads/workload-ingress.md index b293b362b04..8dfb0a8cc95 100644 --- a/docs/getting-started/quick-start-guides/deploy-workloads/workload-ingress.md +++ b/docs/getting-started/quick-start-guides/deploy-workloads/workload-ingress.md @@ -18,7 +18,7 @@ For this workload, you'll be deploying the application Rancher Hello-World. 1. Click **Create**. 1. Click **Deployment**. 1. Enter a **Name** for your workload. -1. From the **Docker Image** field, enter `rancher/hello-world`. This field is case-sensitive. +1. From the **Container Image** field, enter `rancher/hello-world`. This field is case-sensitive. 1. Click **Add Port** and enter `80` in the **Private Container Port** field. Adding a port enables access to the application inside and outside of the cluster. For more information, see [Services](../../../pages-for-subheaders/workloads-and-pods.md#services). 1. Click **Create**. 
From 5d6755d4e40aac92ae985935a65c760f754d64c0 Mon Sep 17 00:00:00 2001 From: martyav Date: Fri, 10 Mar 2023 12:14:58 -0500 Subject: [PATCH 02/22] updating links to point to /all-supported-versions to fix problem with redirects pointing to 2.6.3, not the latest Rancher version --- docs/pages-for-subheaders/installation-requirements.md | 8 ++++---- .../pages-for-subheaders/installation-requirements.md | 8 ++++---- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/pages-for-subheaders/installation-requirements.md b/docs/pages-for-subheaders/installation-requirements.md index 8e3d3876100..8431f164d4b 100644 --- a/docs/pages-for-subheaders/installation-requirements.md +++ b/docs/pages-for-subheaders/installation-requirements.md @@ -17,13 +17,13 @@ See our page on [best practices](../reference-guides/best-practices/rancher-serv ## Kubernetes Compatibility with Rancher -Rancher needs to be installed on a supported Kubernetes version. Consult the [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix) to ensure that your intended version of Kubernetes is supported. +Rancher needs to be installed on a supported Kubernetes version. Consult the [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions) to ensure that your intended version of Kubernetes is supported. ## Operating Systems and Container Runtime Requirements All supported operating systems are 64-bit x86. Rancher should work with any modern Linux distribution. -The [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix) lists which OS and Docker versions were tested for each Rancher version. +The [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions) lists which OS and Docker versions were tested for each Rancher version. Docker is required for nodes that will run RKE clusters. It is not required for RKE2 or K3s clusters. 
@@ -45,7 +45,7 @@ For more information see [Installing Docker,](../getting-started/installation-an For the container runtime, K3s bundles its own containerd by default. Alternatively, you can configure K3s to use an already installed Docker runtime. For more information on using K3s with Docker see the [K3s documentation.](https://docs.k3s.io/advanced#using-docker-as-the-container-runtime) -Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix). To specify the K3s version, use the INSTALL_K3S_VERSION environment variable when running the K3s installation script. +Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions). To specify the K3s version, use the INSTALL_K3S_VERSION environment variable when running the K3s installation script. If you are installing Rancher on a K3s cluster with **Raspbian Buster**, follow [these steps](https://rancher.com/docs/k3s/latest/en/advanced/#enabling-legacy-iptables-on-raspbian-buster) to switch to legacy iptables. @@ -55,7 +55,7 @@ If you are installing Rancher on a K3s cluster with Alpine Linux, follow [these For the container runtime, RKE2 bundles its own containerd. Docker is not required for RKE2 installs. -For details on which OS versions were tested with RKE2, refer to the [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix). +For details on which OS versions were tested with RKE2, refer to the [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions). 
## Hardware Requirements diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/installation-requirements.md b/i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/installation-requirements.md index 236ba86e164..0328552e0e8 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/installation-requirements.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/installation-requirements.md @@ -17,13 +17,13 @@ Rancher UI 在基于 Firefox 或 Chromium 的浏览器(Chrome、Edge、Opera ## Kubernetes 与 Rancher 的兼容性 -Rancher 需要安装在支持的 Kubernetes 版本上。请查阅 [Rancher 支持矩阵](https://www.suse.com/suse-rancher/support-matrix),确保你的 Kubernetes 版本受支持。 +Rancher 需要安装在支持的 Kubernetes 版本上。请查阅 [Rancher 支持矩阵](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions),确保你的 Kubernetes 版本受支持。 ## 操作系统和容器运行时要求 所有支持的操作系统都使用 64-bit x86 架构。Rancher 兼容当前所有的主流 Linux 发行版。 -[Rancher 支持矩阵](https://www.suse.com/suse-rancher/support-matrix)列出了每个 Rancher 版本测试过的操作系统和 Docker 版本。 +[Rancher 支持矩阵](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions)列出了每个 Rancher 版本测试过的操作系统和 Docker 版本。 运行 RKE 集群的节点需要安装 Docker。RKE2 或 K3s 集群不需要它。 @@ -45,7 +45,7 @@ Rancher 需要安装在支持的 Kubernetes 版本上。请查阅 [Rancher 支 对于容器运行时,K3s 默认附带了自己的 containerd。你也可以将 K3s 配置为使用已安装的 Docker 运行时。有关在 Docker 中使用 K3s 的更多信息,请参阅 [K3s 文档](https://docs.k3s.io/advanced#using-docker-as-the-container-runtime)。 -Rancher 需要安装在支持的 Kubernetes 版本上。如需了解你使用的 Rancher 版本支持哪些 Kubernetes 版本,请参见 [Rancher 支持矩阵](https://www.suse.com/suse-rancher/support-matrix)。如需指定 K3s 版本,在运行 K3s 安装脚本时,使用 `INSTALL_K3S_VERSION` 环境变量。 +Rancher 需要安装在支持的 Kubernetes 版本上。如需了解你使用的 Rancher 版本支持哪些 Kubernetes 版本,请参见 [Rancher 支持矩阵](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions)。如需指定 K3s 版本,在运行 K3s 安装脚本时,使用 `INSTALL_K3S_VERSION` 环境变量。 如果你使用 **Raspbian Buster** 在 K3s 集群上安装 
Rancher,请按照[这些步骤](https://rancher.com/docs/k3s/latest/en/advanced/#enabling-legacy-iptables-on-raspbian-buster)切换到旧版 iptables。 @@ -55,7 +55,7 @@ Rancher 需要安装在支持的 Kubernetes 版本上。如需了解你使用的 对于容器运行时,RKE2 附带了自己的 containerd。RKE2 安装不需要 Docker。 -如需了解 RKE2 通过了哪些操作系统版本的测试,请参见 [Rancher 支持矩阵](https://www.suse.com/suse-rancher/support-matrix)。 +如需了解 RKE2 通过了哪些操作系统版本的测试,请参见 [Rancher 支持矩阵](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions)。 ## 硬件要求 From bfe1791de50c0234b073d26c9df65d7865b6fb96 Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Fri, 10 Mar 2023 11:10:19 -0800 Subject: [PATCH 03/22] Update field label to say 'Container Image' as shown in the UI --- .../quick-start-guides/deploy-workloads/workload-ingress.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/versioned_docs/version-2.6/getting-started/quick-start-guides/deploy-workloads/workload-ingress.md b/versioned_docs/version-2.6/getting-started/quick-start-guides/deploy-workloads/workload-ingress.md index 8f1ac728a7d..84fcb0f1b46 100644 --- a/versioned_docs/version-2.6/getting-started/quick-start-guides/deploy-workloads/workload-ingress.md +++ b/versioned_docs/version-2.6/getting-started/quick-start-guides/deploy-workloads/workload-ingress.md @@ -18,7 +18,7 @@ For this workload, you'll be deploying the application Rancher Hello-World. 1. Click **Create**. 1. Click **Deployment**. 1. Enter a **Name** for your workload. -1. From the **Docker Image** field, enter `rancher/hello-world`. This field is case-sensitive. +1. From the **Container Image** field, enter `rancher/hello-world`. This field is case-sensitive. 1. Click **Add Port** and `Cluster IP` for the `Service Type` and enter `80` in the **Private Container Port** field. You may leave the `Name` blank or specify any name that you wish. Adding a port enables access to the application inside and outside of the cluster. For more information, see [Services](../../../pages-for-subheaders/workloads-and-pods.md#services). 1. 
Click **Create**. From d33b296b9bc3be787f20927b01f33d0ac666f01d Mon Sep 17 00:00:00 2001 From: Kyr Shatskyy Date: Fri, 17 Mar 2023 22:45:08 +0100 Subject: [PATCH 04/22] Emphasize the need for the INSTALL_K3S_VERSION parameter Move the clause about INSTALL_K3S_VERSION before the get.k3s.io cluster installation command example to emphasize that it is required to get the correct version of K3s on which Rancher will be installed, because installing the wrong version is an annoying mistake that can force the user to start the setup from scratch. Signed-off-by: Kyr Shatskyy --- .../quick-start-guides/deploy-rancher-manager/helm-cli.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md b/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md index ec47ac7df19..06b62a7841e 100644 --- a/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md +++ b/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md @@ -15,14 +15,14 @@ The full installation requirements are [here](../../../pages-for-subheaders/inst ## Install K3s on Linux +Rancher needs to be installed on a supported Kubernetes version. To specify the K3s version, use the INSTALL_K3S_VERSION environment variable when running the K3s installation script. Refer to the [support maintenance terms](https://rancher.com/support-maintenance-terms/). + Install a K3s cluster by running this command on the Linux machine: ``` curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="***" sh -s - server --cluster-init ``` -Rancher needs to be installed on a supported Kubernetes version. To specify the K3s version, use the INSTALL_K3S_VERSION environment variable when running the K3s installation script. Refer to the [support maintenance terms](https://rancher.com/support-maintenance-terms/). - Using `--cluster-init` allows K3s to use embedded etcd as the datastore and has the ability to convert to an HA setup. 
Refer to [High Availability with Embedded DB](https://rancher.com/docs/k3s/latest/en/installation/ha-embedded/). Save the IP of the Linux machine. From 86b4b5fd972ebe90d5aa92a723b950a00bc12617 Mon Sep 17 00:00:00 2001 From: Marty Hernandez Avedon Date: Fri, 17 Mar 2023 18:00:53 -0400 Subject: [PATCH 05/22] Typo fix for link (#497) * typo fix for link * added fix for versioned docs --- docs/pages-for-subheaders/rancher-security.md | 2 +- .../version-2.0-2.4/pages-for-subheaders/rancher-security.md | 2 +- .../version-2.5/pages-for-subheaders/rancher-security.md | 2 +- .../version-2.6/pages-for-subheaders/rancher-security.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/pages-for-subheaders/rancher-security.md b/docs/pages-for-subheaders/rancher-security.md index 53ad77d370f..0db711d224b 100644 --- a/docs/pages-for-subheaders/rancher-security.md +++ b/docs/pages-for-subheaders/rancher-security.md @@ -54,7 +54,7 @@ We provide two RPMs (Red Hat packages) that enable Rancher products to function The Rancher Hardening Guide is based on controls and best practices found in the CIS Kubernetes Benchmark from the Center for Internet Security. -The hardening guides provide prescriptive guidance for hardening a production installation of Rancher. See Rancher's guides for [Self Assessment of the CIS Kubernetes Benchmark](#the-cis-benchmark-and-self-sssessment) for the full list of security controls. +The hardening guides provide prescriptive guidance for hardening a production installation of Rancher. See Rancher's guides for [Self Assessment of the CIS Kubernetes Benchmark](#the-cis-benchmark-and-self-assessment) for the full list of security controls. > The hardening guides describe how to secure the nodes in your cluster, and it is recommended to follow a hardening guide before installing Kubernetes. 
diff --git a/versioned_docs/version-2.0-2.4/pages-for-subheaders/rancher-security.md b/versioned_docs/version-2.0-2.4/pages-for-subheaders/rancher-security.md index 5d693cbb46e..28001769571 100644 --- a/versioned_docs/version-2.0-2.4/pages-for-subheaders/rancher-security.md +++ b/versioned_docs/version-2.0-2.4/pages-for-subheaders/rancher-security.md @@ -51,7 +51,7 @@ For details, refer to the section on [security scans.](cis-scans) The Rancher Hardening Guide is based on controls and best practices found in the CIS Kubernetes Benchmark from the Center for Internet Security. -The hardening guide provides prescriptive guidance for hardening a production installation of Rancher v2.1.x, v2.2.x and v.2.3.x. See Rancher's guides for [Self Assessment of the CIS Kubernetes Benchmark](#the-cis-benchmark-and-self-sssessment) for the full list of security controls. +The hardening guide provides prescriptive guidance for hardening a production installation of Rancher v2.1.x, v2.2.x and v.2.3.x. See Rancher's guides for [Self Assessment of the CIS Kubernetes Benchmark](#the-cis-benchmark-and-self-assessment) for the full list of security controls. > The hardening guides describe how to secure the nodes in your cluster, and it is recommended to follow a hardening guide before installing Kubernetes. diff --git a/versioned_docs/version-2.5/pages-for-subheaders/rancher-security.md b/versioned_docs/version-2.5/pages-for-subheaders/rancher-security.md index 628515eadcc..ceb1ad9cb70 100644 --- a/versioned_docs/version-2.5/pages-for-subheaders/rancher-security.md +++ b/versioned_docs/version-2.5/pages-for-subheaders/rancher-security.md @@ -48,7 +48,7 @@ We provide two RPMs (Red Hat packages) that enable Rancher products to function The Rancher Hardening Guide is based on controls and best practices found in the CIS Kubernetes Benchmark from the Center for Internet Security. -The hardening guides provide prescriptive guidance for hardening a production installation of Rancher. 
See Rancher's guides for [Self Assessment of the CIS Kubernetes Benchmark](#the-cis-benchmark-and-self-sssessment) for the full list of security controls. +The hardening guides provide prescriptive guidance for hardening a production installation of Rancher. See Rancher's guides for [Self Assessment of the CIS Kubernetes Benchmark](#the-cis-benchmark-and-self-assessment) for the full list of security controls. > The hardening guides describe how to secure the nodes in your cluster, and it is recommended to follow a hardening guide before installing Kubernetes. diff --git a/versioned_docs/version-2.6/pages-for-subheaders/rancher-security.md b/versioned_docs/version-2.6/pages-for-subheaders/rancher-security.md index 53ad77d370f..0db711d224b 100644 --- a/versioned_docs/version-2.6/pages-for-subheaders/rancher-security.md +++ b/versioned_docs/version-2.6/pages-for-subheaders/rancher-security.md @@ -54,7 +54,7 @@ We provide two RPMs (Red Hat packages) that enable Rancher products to function The Rancher Hardening Guide is based on controls and best practices found in the CIS Kubernetes Benchmark from the Center for Internet Security. -The hardening guides provide prescriptive guidance for hardening a production installation of Rancher. See Rancher's guides for [Self Assessment of the CIS Kubernetes Benchmark](#the-cis-benchmark-and-self-sssessment) for the full list of security controls. +The hardening guides provide prescriptive guidance for hardening a production installation of Rancher. See Rancher's guides for [Self Assessment of the CIS Kubernetes Benchmark](#the-cis-benchmark-and-self-assessment) for the full list of security controls. > The hardening guides describe how to secure the nodes in your cluster, and it is recommended to follow a hardening guide before installing Kubernetes. From d48546c707f0aa348690c4da6802c2a381b28f3c Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Fri, 17 Mar 2023 15:02:55 -0700 Subject: [PATCH 06/22] Remove
so link renders correctly (#494) --- .../version-2.6/pages-for-subheaders/authentication-config.md | 1 - 1 file changed, 1 deletion(-) diff --git a/versioned_docs/version-2.6/pages-for-subheaders/authentication-config.md b/versioned_docs/version-2.6/pages-for-subheaders/authentication-config.md index c1787591568..d28bfae8edb 100644 --- a/versioned_docs/version-2.6/pages-for-subheaders/authentication-config.md +++ b/versioned_docs/version-2.6/pages-for-subheaders/authentication-config.md @@ -26,7 +26,6 @@ The Rancher authentication proxy integrates with the following external authenti | [Google OAuth](../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-google-oauth.md) | | [Shibboleth](configure-shibboleth-saml.md) | -
However, Rancher also provides [local authentication](../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/create-local-users.md). In most cases, you should use an external authentication service over local authentication, as external authentication allows user management from a central location. However, you may want a few local authentication users for managing Rancher under rare circumstances, such as if your external authentication provider is unavailable or undergoing maintenance. From 10802befe2bf1e14dedcbefede559ad52af0b52e Mon Sep 17 00:00:00 2001 From: Kyr Shatskyy Date: Fri, 17 Mar 2023 23:06:00 +0100 Subject: [PATCH 07/22] Give a user better idea how INSTALL_K3S_VERSION could look Instead of sending user to find the real number k3s tag format, just give recent version of k3s which is needed by latest rancher. It would be great to update this value automatically, but I have no good idea how to do this at the moment. Signed-off-by: Kyr Shatskyy --- .../quick-start-guides/deploy-rancher-manager/helm-cli.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md b/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md index 06b62a7841e..826e91bd272 100644 --- a/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md +++ b/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md @@ -20,7 +20,7 @@ Rancher needs to be installed on a supported Kubernetes version. To specify the Install a K3s cluster by running this command on the Linux machine: ``` -curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="***" sh -s - server --cluster-init +curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.24.11+k3s1" sh -s - server --cluster-init ``` Using `--cluster-init` allows K3s to use embedded etcd as the datastore and has the ability to convert to an HA setup. 
Refer to [High Availability with Embedded DB](https://rancher.com/docs/k3s/latest/en/installation/ha-embedded/). From 4388ffb703adef48db855ddab7e07b9c4ca5bed1 Mon Sep 17 00:00:00 2001 From: Kyr Shatskyy Date: Fri, 17 Mar 2023 23:35:30 +0100 Subject: [PATCH 08/22] Make a note about the KUBECONFIG environment variable. A user new to K3s might get "surprised" by error messages from kubectl or k3s like: WARN[0000] Unable to read /etc/rancher/k3s/k3s.yaml, please start server with --write-kubeconfig-mode to modify kube config permissions error: error loading config file "/etc/rancher/k3s/k3s.yaml": open /etc/rancher/k3s/k3s.yaml: permission denied This can happen when K3s is already installed on the same machine and `/etc/rancher/k3s/k3s.yaml` is present. Signed-off-by: Kyr Shatskyy --- .../quick-start-guides/deploy-rancher-manager/helm-cli.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md b/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md index 826e91bd272..f97db8db9f7 100644 --- a/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md +++ b/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md @@ -38,6 +38,8 @@ The kubeconfig file is important for accessing the Kubernetes cluster. Copy the scp root@:/etc/rancher/k3s/k3s.yaml ~/.kube/config ``` +In some cases, you may need to make sure that your shell has the environment variable `KUBECONFIG=~/.kube/config` defined; for instance, it can be exported in your profile or rc files. 
+ From 5323ae3ab1a2cdd9533df5980bbb851864b76691 Mon Sep 17 00:00:00 2001 From: dgiebert Date: Mon, 20 Mar 2023 17:28:23 +0100 Subject: [PATCH 09/22] Added instructions for enabling Istio CNI on k3s (#480) * Added instructions for enabling Istio CNI on k3s * Use tabs for rke2 + k3s * Implement spellcheck feedback --- .../install-istio-on-rke2-cluster.md | 75 ++++++++++++------- 1 file changed, 49 insertions(+), 26 deletions(-) diff --git a/docs/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md b/docs/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md index 75906c85ea5..0c229f288b4 100644 --- a/docs/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md +++ b/docs/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md @@ -1,5 +1,5 @@ --- -title: Additional Steps for Installing Istio on an RKE2 Cluster +title: Additional Steps for Installing Istio on RKE2 and K3s Clusters --- When installing or upgrading the Istio Helm chart through **Apps,** @@ -8,30 +8,53 @@ When installing or upgrading the Istio Helm chart through **Apps,** 1. You will see options for configuring the Istio Helm chart. On the **Components** tab, check the box next to **Enabled CNI**. 1. Add a custom overlay file specifying `cniBinDir` and `cniConfDir`. 
For more information on these options, refer to the [Istio documentation.](https://istio.io/latest/docs/setup/additional-setup/cni/#helm-chart-parameters) An example is below: - ```yaml - apiVersion: install.istio.io/v1alpha1 - kind: IstioOperator - spec: - components: - cni: - enabled: true - k8s: - overlays: - - apiVersion: "apps/v1" - kind: "DaemonSet" - name: "istio-cni-node" - patches: - - path: spec.template.spec.containers.[name:install-cni].securityContext.privileged - value: true - values: - cni: - image: rancher/mirrored-istio-install-cni:1.9.3 - excludeNamespaces: - - istio-system - - kube-system - logLevel: info - cniBinDir: /opt/cni/bin - cniConfDir: /etc/cni/net.d - ``` + + + +```yaml +apiVersion: install.istio.io/v1alpha1 +kind: IstioOperator +spec: + components: + cni: + enabled: true + k8s: + overlays: + - apiVersion: "apps/v1" + kind: "DaemonSet" + name: "istio-cni-node" + patches: + - path: spec.template.spec.containers.[name:install-cni].securityContext.privileged + value: true + values: + cni: + cniBinDir: /opt/cni/bin + cniConfDir: /etc/cni/net.d +``` + + + +```yaml +apiVersion: install.istio.io/v1alpha1 +kind: IstioOperator +spec: + components: + cni: + enabled: true + k8s: + overlays: + - apiVersion: "apps/v1" + kind: "DaemonSet" + name: "istio-cni-node" + patches: + - path: spec.template.spec.containers.[name:install-cni].securityContext.privileged + value: true + values: + cni: + cniBinDir: /var/lib/rancher/k3s/data/current/bin + cniConfDir: /var/lib/rancher/k3s/agent/etc/cni/net.d +``` + + **Result:** Now you should be able to utilize Istio as desired, including sidecar injection and monitoring via Kiali. 
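The patch above splits the Istio CNI overlay into distribution-specific `cniBinDir`/`cniConfDir` values. As a minimal sketch, the choice between the two sets of paths can be expressed in shell; the K3s paths assume the default `--data-dir`, and a custom data directory would change them:

```shell
# Sketch: pick the CNI directories for the Istio chart overlay by distribution.
# Paths are taken from the chart overlays in the patch above; a custom
# K3s --data-dir changes the K3s paths.
DISTRO="k3s"   # set to "rke2" for an RKE2 cluster

if [ "$DISTRO" = "k3s" ]; then
  CNI_BIN_DIR="/var/lib/rancher/k3s/data/current/bin"
  CNI_CONF_DIR="/var/lib/rancher/k3s/agent/etc/cni/net.d"
else
  CNI_BIN_DIR="/opt/cni/bin"
  CNI_CONF_DIR="/etc/cni/net.d"
fi

echo "cniBinDir: $CNI_BIN_DIR"
echo "cniConfDir: $CNI_CONF_DIR"
```

Checking that these directories actually exist on a node before enabling the CNI component can save a failed chart install.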
From bc9cff3a13b24876ad582eae89c1b2b084fb91f9 Mon Sep 17 00:00:00 2001 From: kyr Date: Tue, 21 Mar 2023 20:26:01 +0100 Subject: [PATCH 10/22] Update docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md Co-authored-by: Billy Tat --- .../quick-start-guides/deploy-rancher-manager/helm-cli.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md b/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md index f97db8db9f7..b9ad07fa497 100644 --- a/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md +++ b/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md @@ -15,7 +15,7 @@ The full installation requirements are [here](../../../pages-for-subheaders/inst ## Install K3s on Linux -Rancher needs to be installed on a supported Kubernetes version. To specify the K3s version, use the INSTALL_K3S_VERSION environment variable when running the K3s installation script. Refer to the [support maintenance terms](https://rancher.com/support-maintenance-terms/). +Rancher needs to be installed on a supported Kubernetes version. To specify the K3s version, use the INSTALL_K3S_VERSION (e.g., `INSTALL_K3S_VERSION="v1.24.10+k3s1"`) environment variable when running the K3s installation script. Refer to the [support maintenance terms](https://rancher.com/support-maintenance-terms/). 
Install a K3s cluster by running this command on the Linux machine: From fbea12b5d8a911bc65b215d67cbbe49d4bb7f3c9 Mon Sep 17 00:00:00 2001 From: kyr Date: Tue, 21 Mar 2023 20:26:36 +0100 Subject: [PATCH 11/22] Update docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md Co-authored-by: Billy Tat --- .../quick-start-guides/deploy-rancher-manager/helm-cli.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md b/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md index b9ad07fa497..7093fd95f44 100644 --- a/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md +++ b/docs/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli.md @@ -20,7 +20,7 @@ Rancher needs to be installed on a supported Kubernetes version. To specify the Install a K3s cluster by running this command on the Linux machine: ``` -curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.24.11+k3s1" sh -s - server --cluster-init +curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION= sh -s - server --cluster-init ``` Using `--cluster-init` allows K3s to use embedded etcd as the datastore and has the ability to convert to an HA setup. Refer to [High Availability with Embedded DB](https://rancher.com/docs/k3s/latest/en/installation/ha-embedded/). 
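The two patches above pin the K3s version via `INSTALL_K3S_VERSION`, which must be a full release tag. A hedged sketch of validating the tag format before running the install script follows; the version shown is only an illustration taken from the patches, not a recommendation:

```shell
# Sketch: check that a pinned K3s version string looks like a release tag
# (vMAJOR.MINOR.PATCH+k3sREVISION) before passing it to the install script.
# The value below is an example from the patches above, not a recommendation.
K3S_VERSION="v1.24.11+k3s1"

if echo "$K3S_VERSION" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+\+k3s[0-9]+$'; then
  echo "version format ok: $K3S_VERSION"
else
  echo "unexpected version format: $K3S_VERSION" >&2
  exit 1
fi
```

A guard like this catches the "annoying mistake" the commit message describes, where a mistyped or incomplete version forces the user to redo the setup from scratch.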
From 9a6dadaf40a049fd8e64db5ffa69a68041bab7d1 Mon Sep 17 00:00:00 2001 From: Marty Hernandez Avedon Date: Fri, 24 Mar 2023 10:37:42 -0400 Subject: [PATCH 12/22] [DO NOT MERGE] #300 Missing documentation: Rancher monitoring requires port 10254 (#451) * #300 Missing documentation: Rancher monitoring requires port 10254 * capitalization * line break typo * typo in heading * additional context for when you may have to open port 10254 * reworded note about 10254 and v1/pushprox * clarified what setting indicates pushprox is disabled * typo --- .../enable-monitoring.md | 11 ++++---- .../monitoring-and-alerting.md | 28 +++++++++---------- src/components/PortsCustomNodes.js | 2 +- 3 files changed, 20 insertions(+), 21 deletions(-) diff --git a/docs/how-to-guides/advanced-user-guides/monitoring-alerting-guides/enable-monitoring.md b/docs/how-to-guides/advanced-user-guides/monitoring-alerting-guides/enable-monitoring.md index 40e040f50de..2787854f91b 100644 --- a/docs/how-to-guides/advanced-user-guides/monitoring-alerting-guides/enable-monitoring.md +++ b/docs/how-to-guides/advanced-user-guides/monitoring-alerting-guides/enable-monitoring.md @@ -10,10 +10,11 @@ You can enable monitoring with or without SSL. ## Requirements -- Make sure that you are allowing traffic on port 9796 for each of your nodes because Prometheus will scrape metrics from here. -- Make sure your cluster fulfills the resource requirements. The cluster should have at least 1950Mi memory available, 2700m CPU, and 50Gi storage. A breakdown of the resource limits and requests is [here.](../../../reference-guides/monitoring-v2-configuration/helm-chart-options.md#configuring-resource-limits-and-requests) -- When installing monitoring on an RKE cluster using RancherOS or Flatcar Linux nodes, change the etcd node certificate directory to `/opt/rke/etc/kubernetes/ssl`. 
-- For clusters provisioned with the RKE CLI and the address is set to a hostname instead of an IP address, set `rkeEtcd.clients.useLocalhost` to `true` during the Values configuration step of the installation. The YAML snippet will look like the following: +- Allow traffic on port 9796 for each of your nodes. Prometheus scrapes metrics from these ports. + - You may also need to allow traffic on port 10254 for each of your nodes, if [PushProx](../../../integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md#pushprox) is disabled (`ingressNginx.enabled` set to `false`), or you've upgraded from a previous Rancher version that had v1 monitoring already installed. +- Make sure that your cluster fulfills the resource requirements. The cluster should have at least 1950Mi memory available, 2700m CPU, and 50Gi storage. See [Configuring Resource Limits and Requests](../../../reference-guides/monitoring-v2-configuration/helm-chart-options.md#configuring-resource-limits-and-requests) for a breakdown of the resource limits and requests. +- When you install monitoring on an RKE cluster that uses RancherOS or Flatcar Linux nodes, change the etcd node certificate directory to `/opt/rke/etc/kubernetes/ssl`. +- For clusters that have been provisioned with the RKE CLI and that have the address set to a hostname instead of an IP address, set `rkeEtcd.clients.useLocalhost` to `true` when you configure the Values during installation. For example: ```yaml rkeEtcd: @@ -27,7 +28,7 @@ If you want to set up Alertmanager, Grafana or Ingress, it has to be done with t ::: -#Setting Resource Limits and Requests +## Setting Resource Limits and Requests The resource requests and limits can be configured when installing `rancher-monitoring`. To configure Prometheus resources from the Rancher UI, click **Apps > Monitoring** in the upper left corner. 
diff --git a/docs/pages-for-subheaders/monitoring-and-alerting.md b/docs/pages-for-subheaders/monitoring-and-alerting.md index 7fce80ddb37..a398b725ede 100644 --- a/docs/pages-for-subheaders/monitoring-and-alerting.md +++ b/docs/pages-for-subheaders/monitoring-and-alerting.md @@ -14,18 +14,16 @@ By viewing data that Prometheus scrapes from your cluster control plane, nodes, The `rancher-monitoring` operator, introduced in Rancher v2.5, is powered by [Prometheus](https://prometheus.io/), [Grafana](https://grafana.com/grafana/), [Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/), the [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator), and the [Prometheus adapter.](https://github.com/DirectXMan12/k8s-prometheus-adapter) -The monitoring application allows you to: +The monitoring application: -- Monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments -- Define alerts based on metrics collected via Prometheus -- Create custom Grafana dashboards -- Configure alert-based notifications via Email, Slack, PagerDuty, etc. using Prometheus Alertmanager -- Defines precomputed, frequently needed or computationally expensive expressions as new time series based on metrics collected via Prometheus -- Expose collected metrics from Prometheus to the Kubernetes Custom Metrics API via Prometheus Adapter for use in HPA +- Monitors the state and processes of your cluster nodes, Kubernetes components, and software deployments. +- Defines alerts based on metrics collected via Prometheus. +- Creates custom Grafana dashboards. +- Configures alert-based notifications via email, Slack, PagerDuty, etc. using Prometheus Alertmanager. +- Defines precomputed, frequently needed or computationally expensive expressions as new time series based on metrics collected via Prometheus. +- Exposes collected metrics from Prometheus to the Kubernetes Custom Metrics API via Prometheus Adapter for use in HPA. 
-## How Monitoring Works - -For an explanation of how the monitoring components work together, see [this page.](../integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md) +See [How Monitoring Works](../integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md) for an explanation of how the monitoring components work together. ## Default Components and Deployments @@ -65,7 +63,7 @@ For information on configuring access to monitoring, see [this page.](../integra ### Configuring Monitoring Resources in Rancher -> The configuration reference assumes familiarity with how monitoring components work together. For more information, see [How Monitoring Works.](../integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md) +The configuration reference assumes familiarity with how monitoring components work together. For more information, see [How Monitoring Works.](../integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md) - [ServiceMonitor and PodMonitor](../reference-guides/monitoring-v2-configuration/servicemonitors-and-podmonitors.md) - [Receiver](../reference-guides/monitoring-v2-configuration/receivers.md) @@ -76,7 +74,7 @@ For information on configuring access to monitoring, see [this page.](../integra ### Configuring Helm Chart Options -For more information on `rancher-monitoring` chart options, including options to set resource limits and requests, see [this page.](../reference-guides/monitoring-v2-configuration/helm-chart-options.md) +For more information on `rancher-monitoring` chart options, including options to set resource limits and requests, see [Helm Chart Options](../reference-guides/monitoring-v2-configuration/helm-chart-options.md). 
## Windows Cluster Support @@ -84,11 +82,11 @@ When deployed onto an RKE1 Windows cluster, Monitoring V2 will now automatically To be able to fully deploy Monitoring V2 for Windows, all of your Windows hosts must have a minimum [wins](https://github.com/rancher/wins) version of v0.1.0. -For more details on how to upgrade wins on existing Windows hosts, refer to the section on [Windows cluster support for Monitoring V2.](../integrations-in-rancher/monitoring-and-alerting/windows-support.md) +For more details on how to upgrade wins on existing Windows hosts, see [Windows cluster support for Monitoring V2](../integrations-in-rancher/monitoring-and-alerting/windows-support.md). ## Known Issues -There is a [known issue](https://github.com/rancher/rancher/issues/28787#issuecomment-693611821) that K3s clusters require more default memory. If you are enabling monitoring on a K3s cluster, we recommend to setting `prometheus.prometheusSpec.resources.memory.limit` to 2500 Mi and `prometheus.prometheusSpec.resources.memory.request` to 1750 Mi. +There is a [known issue](https://github.com/rancher/rancher/issues/28787#issuecomment-693611821) that K3s clusters require more than the allotted default memory. If you enable monitoring on a K3s cluster, set `prometheus.prometheusSpec.resources.memory.limit` to 2500 Mi and `prometheus.prometheusSpec.resources.memory.request` to 1750 Mi. -For tips on debugging high memory usage, see [this page.](../how-to-guides/advanced-user-guides/monitoring-alerting-guides/debug-high-memory-usage.md) +See [Debugging High Memory Usage](../how-to-guides/advanced-user-guides/monitoring-alerting-guides/debug-high-memory-usage.md) for advice and recommendations. diff --git a/src/components/PortsCustomNodes.js b/src/components/PortsCustomNodes.js index 3018b5e8ead..454a4b4415b 100644 --- a/src/components/PortsCustomNodes.js +++ b/src/components/PortsCustomNodes.js @@ -254,7 +254,7 @@ const PortsCustomNodes = () => ( - Notes:<br/>

1. Nodes running standalone server or Rancher HA deployment.
2. Required to fetch Rancher chart library.
3. Only without external load balancer in front of Rancher.
4. Local traffic to the node itself (not across nodes).
5. Only if Authorized Cluster Endpoints are activated.
6. Only if using Overlay mode on Windows cluster. + Notes:

1. Nodes running standalone server or Rancher HA deployment.
2. Required to fetch Rancher chart library.
3. Only without external load balancer in front of Rancher.
4. Local traffic to the node itself (not across nodes), if you've enabled optional features such as Rancher Monitoring.
5. Only if Authorized Cluster Endpoints are activated.
6. Only if using Overlay mode on Windows cluster. From 3b371ab635d51ac0249cb6b19b934f2d3be12d2a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Klaus=20K=C3=A4mpf?= Date: Fri, 24 Mar 2023 16:02:47 +0100 Subject: [PATCH 13/22] Fix links to fleet.rancher.io (#506) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The Fleet documentation got restructured a while ago, breaking many links from rancher/rancher-docs Signed-off-by: Klaus Kämpf --- docs/pages-for-subheaders/fleet-gitops-at-scale.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/pages-for-subheaders/fleet-gitops-at-scale.md b/docs/pages-for-subheaders/fleet-gitops-at-scale.md index a521e1b2a18..17696ca5230 100644 --- a/docs/pages-for-subheaders/fleet-gitops-at-scale.md +++ b/docs/pages-for-subheaders/fleet-gitops-at-scale.md @@ -2,7 +2,7 @@ title: Fleet - GitOps at Scale --- -Fleet is GitOps at scale. Fleet is designed to manage up to a million clusters. It’s also lightweight enough that it works great for a [single cluster](https://fleet.rancher.io/single-cluster-install/) too, but it really shines when you get to a [large scale](https://fleet.rancher.io/multi-cluster-install/). By large scale we mean either a lot of clusters, a lot of deployments, or a lot of teams in a single organization. +Fleet is GitOps at scale. Fleet is designed to manage up to a million clusters. It’s also lightweight enough that it works great for a [single cluster](https://fleet.rancher.io/tut-deployment#single-cluster-examples) too, but it really shines when you get to a [large scale](https://fleet.rancher.io/tut-deployment#multi-cluster-examples). By large scale we mean either a lot of clusters, a lot of deployments, or a lot of teams in a single organization. Fleet is a separate project from Rancher, and can be installed on any Kubernetes cluster with Helm. @@ -31,7 +31,7 @@ Follow the steps below to access Continuous Delivery in the Rancher UI: 1. 
Click on **Gitrepos** on the left navigation bar to deploy the gitrepo into your clusters in the current workspace. -1. Select your [git repository](https://fleet.rancher.io/gitrepo-add/) and [target clusters/cluster group](https://fleet.rancher.io/gitrepo-structure/). You can also create the cluster group in the UI by clicking on **Cluster Groups** from the left navigation bar. +1. Select your [git repository](https://fleet.rancher.io/gitrepo-add/) and [target clusters/cluster group](https://fleet.rancher.io/gitrepo-targets/). You can also create the cluster group in the UI by clicking on **Cluster Groups** from the left navigation bar. 1. Once the gitrepo is deployed, you can monitor the application through the Rancher UI. @@ -41,7 +41,7 @@ For details on support for clusters with Windows nodes, see [this page](../integ ## GitHub Repository -The Fleet Helm charts are available [here](https://github.com/rancher/fleet/releases/tag/v0.3.10). +The Fleet Helm charts are available [here](https://github.com/rancher/fleet/releases). 
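The repository-and-targets selection described above can also be written declaratively as a `GitRepo` custom resource. The following is a minimal sketch — the repo URL, branch, and path are placeholders, and the empty `clusterSelector` (which matches every cluster in the workspace) is just one possible targeting choice; see the Fleet docs linked above for the full schema:

```yaml
# Hypothetical Fleet GitRepo resource; an empty clusterSelector targets
# every cluster in the fleet-default workspace.
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: sample
  namespace: fleet-default
spec:
  repo: https://github.com/example/fleet-examples
  branch: main
  paths:
    - simple
  targets:
    - name: all-clusters
      clusterSelector: {}
```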
## Using Fleet Behind a Proxy From 8664d43800c9b74c5d1f9d2ff010d62e49c54830 Mon Sep 17 00:00:00 2001 From: Max Sokolovsky Date: Fri, 24 Mar 2023 16:54:10 -0400 Subject: [PATCH 14/22] [2.6] Add to the note about Azure AD permission recommendations --- .../authentication-config/configure-azure-ad.md | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-) diff --git a/versioned_docs/version-2.6/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md b/versioned_docs/version-2.6/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md index c1d93c8480f..274e1eb485e 100644 --- a/versioned_docs/version-2.6/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md +++ b/versioned_docs/version-2.6/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md @@ -122,7 +122,21 @@ In Rancher versions 2.6.7-2.6.10, you'll need to use `User.Read.All` and `Group. :::note -Rancher doesn't validate the permissions you grant to the app in Azure. We only support the use of the `Directory.Read.All` application permission. +Rancher doesn't validate the permissions you grant to the app in Azure. You're free to try any permissions you want, as long as they allow Rancher to work with AD users and groups. + + Specifically, Rancher needs permissions that allow the following actions: + - Get a user. + - List all users. + - List groups of which a given user is a member. + - Get a group. + - List all groups. + + Rancher performs these actions either to log in a user or to run a user/group search. Keep in mind that the permissions must be of type `Application`. 
+ + Here are a few examples of permission combinations that satisfy Rancher's needs: + - `Directory.Read.All` + - `User.Read.All` and `GroupMember.Read.All` + - `User.Read.All` and `Group.Read.All` ::: From 9c4fa7b21c74dded7009aa04c0114d1bae547c73 Mon Sep 17 00:00:00 2001 From: Vasili Date: Mon, 27 Mar 2023 17:52:40 +0300 Subject: [PATCH 15/22] Fix grammar (#510) --- .../use-new-nodes-in-an-infra-provider.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/pages-for-subheaders/use-new-nodes-in-an-infra-provider.md b/docs/pages-for-subheaders/use-new-nodes-in-an-infra-provider.md index f96d4fed8c2..f1394eff00f 100644 --- a/docs/pages-for-subheaders/use-new-nodes-in-an-infra-provider.md +++ b/docs/pages-for-subheaders/use-new-nodes-in-an-infra-provider.md @@ -72,7 +72,7 @@ By default, Rancher tries to run the Docker Install script when provisioning RKE #### Node Pool Taints -If you haven't defined [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) on your node template, you can add taints for each node pool. The benefit of adding taints at a node pool is beneficial over adding it at a node template is that you can swap out the node templates without worrying if the taint is on the node template. +If you haven't defined [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) on your node template, you can add taints for each node pool. The benefit of adding taints to a node pool is that you can change the node template without having to first ensure that the taint exists in the new template. Each taint will automatically be added to any node created in the node pool. Therefore, if you add taints to a node pool that has existing nodes, the taints won't apply to the existing nodes, but any new node added to the node pool will get the taint.
@@ -149,4 +149,4 @@ In our [recommended cluster architecture](../how-to-guides/new-user-guides/kuber - At least three nodes with the role etcd to survive losing one node - At least two nodes with the role controlplane for master component high availability -- At least two nodes with the role worker for workload rescheduling upon node failure \ No newline at end of file +- At least two nodes with the role worker for workload rescheduling upon node failure From 1b03e3d29d2cede2d4672803241b12e2eb49944b Mon Sep 17 00:00:00 2001 From: vickyhella Date: Mon, 27 Mar 2023 14:44:22 +0800 Subject: [PATCH 16/22] Delete ZH docs no longer needed --- .../deploy-rancher-manager/aws-marketplace.md | 6 - .../clone-cluster-configuration.md | 110 --------- .../amazon-eks-permissions.md | 109 --------- .../minimum-eks-permissions.md | 223 ------------------ .../clone-cluster-configuration.md | 110 --------- .../amazon-eks-permissions.md | 109 --------- .../minimum-eks-permissions.md | 223 ------------------ 7 files changed, 890 deletions(-) delete mode 100644 i18n/zh/docusaurus-plugin-content-docs/current/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md delete mode 100644 i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/clone-cluster-configuration.md delete mode 100644 i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/amazon-eks-permissions.md delete mode 100644 i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/amazon-eks-permissions/minimum-eks-permissions.md delete mode 100644 i18n/zh/docusaurus-plugin-content-docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/clone-cluster-configuration.md delete mode 100644 i18n/zh/docusaurus-plugin-content-docs/version-2.6/pages-for-subheaders/amazon-eks-permissions.md delete mode 100644 i18n/zh/docusaurus-plugin-content-docs/version-2.6/reference-guides/amazon-eks-permissions/minimum-eks-permissions.md diff --git 
a/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md b/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md deleted file mode 100644 index 6b2acce89b1..00000000000 --- a/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: Rancher AWS Marketplace 快速入门 -description: 使用 Amazon EKS 部署 Rancher Server。 ---- - -你可以在 AWS 中使用 Amazon EKS 部署 Rancher Server。详情请参见我们的 [Amazon Marketplace 列表](https://aws.amazon.com/marketplace/pp/prodview-go7ent7goo5ae)。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/clone-cluster-configuration.md b/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/clone-cluster-configuration.md deleted file mode 100644 index 21cb5570876..00000000000 --- a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/clone-cluster-configuration.md +++ /dev/null @@ -1,110 +0,0 @@ ---- -title: 克隆集群 ---- - -如果你在 Rancher 中有一个集群并想将这个集群用作创建集群的模板,你可以使用 Rancher CLI 克隆集群的配置,编辑配置,然后使用这些配置来快速启动克隆的集群。 - -不支持复制已注册的集群。 - -| 集群类型 | 是否可克隆 | -|----------------------------------|---------------| -| [由基础设施提供商托管的节点](../../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md) | ✓ | -| [托管的 Kubernetes 提供商](../../../pages-for-subheaders/set-up-clusters-from-hosted-kubernetes-providers.md) | ✓ | -| [自定义集群](../../../pages-for-subheaders/use-existing-nodes.md) | ✓ | -| [已注册集群](../../new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md) | | - -:::caution - -在复制集群的过程中,你将编辑一个包含所有集群设置的配置文件。但是,由于集群复制 **_not_** 大规模更改配置,因此我们建议仅编辑本文中明确列出的值。编辑其他值可能会使配置文件失效,从而导致集群部署失败。 - -::: - -## 先决条件 - -下载并安装 [Rancher CLI](../../../pages-for-subheaders/cli-with-rancher.md)。如有必要,请[创建 API 
持有者令牌](../../../reference-guides/user-settings/api-keys.md)。 - - -## 1. 导出集群配置 - -首先,使用 Rancher CLI 导出要克隆的集群的配置。 - -1. 打开终端并转到 Rancher CLI 二进制文件所在的位置 `rancher`。 - -1. 运行以下命令以列出 Rancher 管理的集群: - - - ./rancher cluster ls - - -1. 找到要克隆的集群,并将其资源 `ID` 或 `NAME` 复制到剪贴板。从此处开始,我们将资源 `ID` 或 `NAME` 称为 ``,它在接下来用作占位符。 - -1. 运行以下命令以导出集群的配置: - - - ./rancher clusters export - - - **步骤结果**:已将克隆集群的 YAML 打印到终端。 - -1. 将 YAML 粘贴到新文件中。将文件另存为 `cluster-template.yml`(或任何其他名称,确保扩展名是 `.yml` 即可)。 - -## 2. 修改集群配置 - -使用文本编辑器为克隆集群修改 `cluster-template.yml` 中的集群配置。 - -:::note - -集群配置参数必须嵌套在 `cluster.yml` 中的 `rancher_kubernetes_engine_config` 下。有关详细信息,请参阅 [Rancher 2.3.0+ 配置文件结构](../../../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#rancher-中的配置文件结构)。 - -::: - -1. 在文本编辑器中打开 `cluster-template.yml`(或你重命名了的配置文件)。 - - :::caution - - 仅需编辑下面明确指出的集群配置项。此文件中列出的很多值均用于配置克隆的集群,因此编辑它们的值可能会中断配置过程。 - - ::: - - -1. 如下例所示,在 `` 占位符处将原始集群的名称替换为唯一名称 (``)。如果克隆的集群名称重复,则集群将无法成功配置。 - - ```yml - Version: v3 - clusters: - : # 输入唯一的名称 - dockerRootDir: /var/lib/docker - enableNetworkPolicy: false - rancherKubernetesEngineConfig: - addonJobTimeout: 30 - authentication: - strategy: x509 - authorization: {} - bastionHost: {} - cloudProvider: {} - ignoreDockerVersion: true - ``` - -1. 对于每个 `nodePools`,将原始节点池名称替换为 `` 占位符处的唯一名称。如果克隆集群具有重复的节点池名称,则集群将无法成功配置。 - - ```yml - nodePools: - : - clusterId: do - controlPlane: true - etcd: true - hostnamePrefix: mark-do - nodeTemplateId: do - quantity: 1 - worker: true - ``` - -1. 完成后,保存并关闭配置。 - -## 3. 
启动克隆的集群 - -将 `cluster-template.yml` 移动到 Rancher CLI 二进制文件所在的目录中。然后运行这个命令: - - ./rancher up --file cluster-template.yml - -**结果**:开始配置你克隆的集群。输入 `./rancher cluster ls` 进行确认。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/amazon-eks-permissions.md b/i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/amazon-eks-permissions.md deleted file mode 100644 index 6de98186062..00000000000 --- a/i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/amazon-eks-permissions.md +++ /dev/null @@ -1,109 +0,0 @@ ---- -title: 创建 EKS 集群 ---- -Amazon EKS 为 Kubernetes 集群提供托管的 controlplane。Amazon EKS 跨多个可用区运行 Kubernetes controlplane 实例,以确保高可用性。Rancher 提供了一个直观的用户界面,用于管理和部署你运行在 Amazon EKS 中的 Kubernetes 集群。通过本指南,你将使用 Rancher 在你的 AWS 账户中快速轻松地启动 Amazon EKS Kubernetes 集群。有关 Amazon EKS 的更多信息,请参阅此[文档](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html)。 - - -## Amazon Web 服务的先决条件 - -:::caution - -部署到 Amazon AWS 会产生费用。有关详细信息,请参阅 [EKS 定价页面](https://aws.amazon.com/eks/pricing/)。 - -::: - -要在 EKS 上设置集群,你需要设置 Amazon VPC(虚拟私有云)。你还需要确保用于创建 EKS 集群的账号具有适当的[权限](#最小-eks-权限)。详情请参阅 [Amazon EKS 先决条件官方指南](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#eks-prereqs)。 - -### Amazon VPC - -你需要建立一个 Amazon VPC 来启动 EKS 集群。VPC 使你能够将 AWS 资源启动到你定义的虚拟网络中。你可以自己设置一个 VPC,并在 Rancher 中创建集群时提供它。如果你创建过程中没有提供,Rancher 将创建一个 VPC。详情请参阅[教程:为你的 Amazon EKS 集群创建具有公有和私有子网的 VPC](https://docs.aws.amazon.com/eks/latest/userguide/create-public-private-vpc.html)。 - -### IAM 策略 - -Rancher 需要访问你的 AWS 账户才能在 Amazon EKS 中预置和管理你的 Kubernetes 集群。你需要在 AWS 账户中为 Rancher 创建一个用户,并定义该用户可以访问的内容。 - -1. 按照[此处](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html)的步骤创建具有编程访问权限的用户。 - -2. 创建一个 IAM 策略,定义该用户在 AWS 账户中有权访问的内容。请务必仅授予此用户所需的最小访问权限。[此处](#最小-eks-权限)列出了 EKS 集群所需的最低权限。请按照[此处](https://docs.aws.amazon.com/eks/latest/userguide/EKS_IAM_user_policies.html)的步骤创建 IAM 策略并将策略绑定到你的用户。 - -3. 
最后,按照[此处](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey)的步骤为该用户创建访问密钥和密文密钥。 - -:::note 重要提示: - -定期轮换访问密钥和密文密钥非常重要。有关详细信息,请参阅此[文档](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#rotating_access_keys_console)。 - -::: - -有关 EKS 的 IAM 策略的更多详细信息,请参阅 [Amazon EKS IAM 策略、角色和权限的官方文档](https://docs.aws.amazon.com/eks/latest/userguide/IAM_policies.html)。 - - -## 创建 EKS 集群 - -使用 Rancher 配置你的 Kubernetes 集群。 - -1. 点击 **☰ > 集群管理**。 -1. 在**集群**页面上,单击**创建**。 -1. 选择 **Amazon EKS**。 -1. 输入**集群名称**。 -1. 使用**成员角色**为集群配置用户授权。点击**添加成员**添加可以访问集群的用户。使用**角色**下拉菜单为每个用户设置权限。 -1. 完成表单的其余部分。如需帮助,请参阅[配置参考](#eks-集群配置参考)。 -1. 单击**创建**。 - -**结果**: - -你已创建集群,集群的状态是**配置中**。Rancher 已在你的集群中。 - -当集群状态变为 **Active** 后,你可访问集群。 - -**Active** 状态的集群会分配到两个项目: - -- `Default`:包含 `default` 命名空间 -- `System`:包含 `cattle-system`,`ingress-nginx`,`kube-public` 和 `kube-system` 命名空间。 - -## EKS 集群配置参考 - -有关 EKS 集群配置选项的完整列表,请参阅[此页面](../reference-guides/cluster-configuration/rancher-server-configuration/eks-cluster-configuration.md)。 - -## 架构 - -下图展示了 Rancher 2.x 的上层架构。下图中,Rancher Server 管理两个 Kubernetes 集群,其中一个由 RKE 创建,另一个由 EKS 创建。 - -
通过 Rancher 的认证代理管理 Kubernetes 集群
- -![架构](/img/rancher-architecture-rancher-api-server.svg) - -## AWS 服务事件 - -有关 AWS 服务事件的信息,请参阅[此页面](https://status.aws.amazon.com/)。 - -## 安全与合规 - -默认情况下,只有创建集群的 IAM 用户或角色才能访问该集群。在没有额外配置的情况下,使用其他用户或角色访问集群将导致错误。在 Rancher 中,这意味着使用映射到未用于创建集群的用户或角色的凭证,导致未经授权的错误。除非用于注册集群的凭证与 EKSCtl 使用的角色或用户匹配,否则 EKSCtl 集群将不会注册到 Rancher。通过将其他用户和角色添加到 kube-system 命名空间中的 aws-auth configmap,可以授权其他用户和角色访问集群。如需更深入的解释和详细说明,请参阅此[文档](https://aws.amazon.com/premiumsupport/knowledge-center/amazon-eks-cluster-access/)。 - -有关 Amazon EKS Kubernetes 集群的安全性和合规性的更多信息,请参阅此[文档](https://docs.aws.amazon.com/eks/latest/userguide/shared-responsibilty.html)。 - -## 教程 - -AWS 开源博客上的这篇[教程](https://aws.amazon.com/blogs/opensource/managing-eks-clusters-rancher/)将指导你使用 Rancher 设置一个 EKS 集群,部署一个可公开访问的示例应用来测试集群,并部署一个使用其他开源软件(如 Grafana 和 influxdb)来实时监控地理信息的示例项目。 - -## 最小 EKS 权限 - -请参阅[此页面](../reference-guides/amazon-eks-permissions/minimum-eks-permissions.md),了解在 Rancher 中使用 EKS 驱动所有功能所需的最小权限。 - -## 同步 - -EKS 配置者可以在 Rancher 和提供商之间同步 EKS 集群的状态。有关其工作原理的技术说明,请参阅[同步](../reference-guides/cluster-configuration/rancher-server-configuration/sync-clusters.md)。 - -有关配置刷新间隔的信息,请参阅[本节](../reference-guides/cluster-configuration/rancher-server-configuration/eks-cluster-configuration.md#配置刷新间隔)。 - -## 故障排除 - -如果你的更改被覆盖,可能是集群数据与 EKS 同步的方式导致的。不要在使用其他源(例如 EKS 控制台)对集群进行更改后,又在五分钟之内在 Rancher 中进行更改。有关其工作原理,以及如何配置刷新间隔的信息,请参阅[同步](#同步)。 - -如果在修改或注册集群时返回未经授权的错误,并且集群不是使用你的凭证所属的角色或用户创建的,请参阅[安全与合规](#安全与合规)。 - -有关 Amazon EKS Kubernetes 集群的任何问题或故障排除详细信息,请参阅此[文档](https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html)。 - -## 以编程方式创建 EKS 集群 - -通过 Rancher 以编程方式部署 EKS 集群的最常见方法是使用 Rancher 2 Terraform Provider。详情请参见[使用 Terraform 创建集群](https://registry.terraform.io/providers/rancher/rancher2/latest/docs/resources/cluster)。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/amazon-eks-permissions/minimum-eks-permissions.md 
b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/amazon-eks-permissions/minimum-eks-permissions.md deleted file mode 100644 index f2a503dcced..00000000000 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/amazon-eks-permissions/minimum-eks-permissions.md +++ /dev/null @@ -1,223 +0,0 @@ ---- -title: 最小 EKS 权限 ---- - -此处提供在 Rancher 中使用 EKS 驱动所有功能所需的最小权限。Rancher 需要额外的权限来配置`服务角色`和 `VPC` 资源。你可以选择在创建集群**之前**创建这些资源,以便在定义集群配置时选择这些资源。 - -| 资源 | 描述 | ----------|------------ -| 服务角色 | 服务角色向 Kubernetes 提供管理资源所需的权限。Rancher 可以使用以下[服务角色权限](#服务角色权限)来创建服务角色。 | -| VPC | 提供 EKS 和 Worker 节点使用的隔离网络资源。Rancher 使用以下 [VPC 权限](#vpc-权限)创建 VPC 资源。 | - - -资源定位使用 `*` 作为在 Rancher 中创建 EKS 集群之前,无法已知创建的资源的名称(ARN)。 - -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "EC2Permisssions", - "Effect": "Allow", - "Action": [ - "ec2:RunInstances", - "ec2:RevokeSecurityGroupIngress", - "ec2:RevokeSecurityGroupEgress", - "ec2:DescribeInstanceTypes", - "ec2:DescribeRegions", - "ec2:DescribeVpcs", - "ec2:DescribeTags", - "ec2:DescribeSubnets", - "ec2:DescribeSecurityGroups", - "ec2:DescribeRouteTables", - "ec2:DescribeLaunchTemplateVersions", - "ec2:DescribeLaunchTemplates", - "ec2:DescribeKeyPairs", - "ec2:DescribeInternetGateways", - "ec2:DescribeImages", - "ec2:DescribeAvailabilityZones", - "ec2:DescribeAccountAttributes", - "ec2:DeleteTags", - "ec2:DeleteSecurityGroup", - "ec2:DeleteKeyPair", - "ec2:CreateTags", - "ec2:CreateSecurityGroup", - "ec2:CreateLaunchTemplateVersion", - "ec2:CreateLaunchTemplate", - "ec2:CreateKeyPair", - "ec2:AuthorizeSecurityGroupIngress", - "ec2:AuthorizeSecurityGroupEgress" - ], - "Resource": "*" - }, - { - "Sid": "CloudFormationPermisssions", - "Effect": "Allow", - "Action": [ - "cloudformation:ListStacks", - "cloudformation:ListStackResources", - "cloudformation:DescribeStacks", - "cloudformation:DescribeStackResources", - "cloudformation:DescribeStackResource", - "cloudformation:DeleteStack", - 
"cloudformation:CreateStackSet", - "cloudformation:CreateStack" - ], - "Resource": "*" - }, - { - "Sid": "IAMPermissions", - "Effect": "Allow", - "Action": [ - "iam:PassRole", - "iam:ListRoles", - "iam:ListRoleTags", - "iam:ListInstanceProfilesForRole", - "iam:ListInstanceProfiles", - "iam:ListAttachedRolePolicies", - "iam:GetRole", - "iam:GetInstanceProfile", - "iam:DetachRolePolicy", - "iam:DeleteRole", - "iam:CreateRole", - "iam:AttachRolePolicy" - ], - "Resource": "*" - }, - { - "Sid": "KMSPermisssions", - "Effect": "Allow", - "Action": "kms:ListKeys", - "Resource": "*" - }, - { - "Sid": "EKSPermisssions", - "Effect": "Allow", - "Action": [ - "eks:UpdateNodegroupVersion", - "eks:UpdateNodegroupConfig", - "eks:UpdateClusterVersion", - "eks:UpdateClusterConfig", - "eks:UntagResource", - "eks:TagResource", - "eks:ListUpdates", - "eks:ListTagsForResource", - "eks:ListNodegroups", - "eks:ListFargateProfiles", - "eks:ListClusters", - "eks:DescribeUpdate", - "eks:DescribeNodegroup", - "eks:DescribeFargateProfile", - "eks:DescribeCluster", - "eks:DeleteNodegroup", - "eks:DeleteFargateProfile", - "eks:DeleteCluster", - "eks:CreateNodegroup", - "eks:CreateFargateProfile", - "eks:CreateCluster" - ], - "Resource": "*" - } - ] -} -``` - -### 服务角色权限 - -指的是 Rancher 在 EKS 集群创建过程中,代表用户创建服务角色所需的权限。 - -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "IAMPermisssions", - "Effect": "Allow", - "Action": [ - "iam:AddRoleToInstanceProfile", - "iam:AttachRolePolicy", - "iam:CreateInstanceProfile", - "iam:CreateRole", - "iam:CreateServiceLinkedRole", - "iam:DeleteInstanceProfile", - "iam:DeleteRole", - "iam:DetachRolePolicy", - "iam:GetInstanceProfile", - "iam:GetRole", - "iam:ListAttachedRolePolicies", - "iam:ListInstanceProfiles", - "iam:ListInstanceProfilesForRole", - "iam:ListRoles", - "iam:ListRoleTags", - "iam:PassRole", - "iam:RemoveRoleFromInstanceProfile" - ], - "Resource": "*" - } - ] -} -``` - -创建 EKS 集群时,Rancher 将创建具有以下信任策略的服务角色: - -```json -{ - 
"Version": "2012-10-17", - "Statement": [ - { - "Action": "sts:AssumeRole", - "Principal": { - "Service": "eks.amazonaws.com" - }, - "Effect": "Allow", - "Sid": "" - } - ] -} -``` - -此角色还将具有两个角色策略,其中包含以下策略 ARN: - -``` -arn:aws:iam::aws:policy/AmazonEKSClusterPolicy -arn:aws:iam::aws:policy/AmazonEKSServicePolicy -``` - -### VPC 权限 - -Rancher 创建 VPC 和关联资源所需的权限。 - -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "VPCPermissions", - "Effect": "Allow", - "Action": [ - "ec2:ReplaceRoute", - "ec2:ModifyVpcAttribute", - "ec2:ModifySubnetAttribute", - "ec2:DisassociateRouteTable", - "ec2:DetachInternetGateway", - "ec2:DescribeVpcs", - "ec2:DeleteVpc", - "ec2:DeleteTags", - "ec2:DeleteSubnet", - "ec2:DeleteRouteTable", - "ec2:DeleteRoute", - "ec2:DeleteInternetGateway", - "ec2:CreateVpc", - "ec2:CreateSubnet", - "ec2:CreateSecurityGroup", - "ec2:CreateRouteTable", - "ec2:CreateRoute", - "ec2:CreateInternetGateway", - "ec2:AttachInternetGateway", - "ec2:AssociateRouteTable" - ], - "Resource": "*" - } - ] -} -``` diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/clone-cluster-configuration.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/clone-cluster-configuration.md deleted file mode 100644 index 21cb5570876..00000000000 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/clone-cluster-configuration.md +++ /dev/null @@ -1,110 +0,0 @@ ---- -title: 克隆集群 ---- - -如果你在 Rancher 中有一个集群并想将这个集群用作创建集群的模板,你可以使用 Rancher CLI 克隆集群的配置,编辑配置,然后使用这些配置来快速启动克隆的集群。 - -不支持复制已注册的集群。 - -| 集群类型 | 是否可克隆 | -|----------------------------------|---------------| -| [由基础设施提供商托管的节点](../../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md) | ✓ | -| [托管的 Kubernetes 提供商](../../../pages-for-subheaders/set-up-clusters-from-hosted-kubernetes-providers.md) | ✓ | -| 
[自定义集群](../../../pages-for-subheaders/use-existing-nodes.md) | ✓ | -| [已注册集群](../../new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md) | | - -:::caution - -在复制集群的过程中,你将编辑一个包含所有集群设置的配置文件。但是,由于集群复制 **_not_** 大规模更改配置,因此我们建议仅编辑本文中明确列出的值。编辑其他值可能会使配置文件失效,从而导致集群部署失败。 - -::: - -## 先决条件 - -下载并安装 [Rancher CLI](../../../pages-for-subheaders/cli-with-rancher.md)。如有必要,请[创建 API 持有者令牌](../../../reference-guides/user-settings/api-keys.md)。 - - -## 1. 导出集群配置 - -首先,使用 Rancher CLI 导出要克隆的集群的配置。 - -1. 打开终端并转到 Rancher CLI 二进制文件所在的位置 `rancher`。 - -1. 运行以下命令以列出 Rancher 管理的集群: - - - ./rancher cluster ls - - -1. 找到要克隆的集群,并将其资源 `ID` 或 `NAME` 复制到剪贴板。从此处开始,我们将资源 `ID` 或 `NAME` 称为 ``,它在接下来用作占位符。 - -1. 运行以下命令以导出集群的配置: - - - ./rancher clusters export - - - **步骤结果**:已将克隆集群的 YAML 打印到终端。 - -1. 将 YAML 粘贴到新文件中。将文件另存为 `cluster-template.yml`(或任何其他名称,确保扩展名是 `.yml` 即可)。 - -## 2. 修改集群配置 - -使用文本编辑器为克隆集群修改 `cluster-template.yml` 中的集群配置。 - -:::note - -集群配置参数必须嵌套在 `cluster.yml` 中的 `rancher_kubernetes_engine_config` 下。有关详细信息,请参阅 [Rancher 2.3.0+ 配置文件结构](../../../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#rancher-中的配置文件结构)。 - -::: - -1. 在文本编辑器中打开 `cluster-template.yml`(或你重命名了的配置文件)。 - - :::caution - - 仅需编辑下面明确指出的集群配置项。此文件中列出的很多值均用于配置克隆的集群,因此编辑它们的值可能会中断配置过程。 - - ::: - - -1. 如下例所示,在 `` 占位符处将原始集群的名称替换为唯一名称 (``)。如果克隆的集群名称重复,则集群将无法成功配置。 - - ```yml - Version: v3 - clusters: - : # 输入唯一的名称 - dockerRootDir: /var/lib/docker - enableNetworkPolicy: false - rancherKubernetesEngineConfig: - addonJobTimeout: 30 - authentication: - strategy: x509 - authorization: {} - bastionHost: {} - cloudProvider: {} - ignoreDockerVersion: true - ``` - -1. 对于每个 `nodePools`,将原始节点池名称替换为 `` 占位符处的唯一名称。如果克隆集群具有重复的节点池名称,则集群将无法成功配置。 - - ```yml - nodePools: - : - clusterId: do - controlPlane: true - etcd: true - hostnamePrefix: mark-do - nodeTemplateId: do - quantity: 1 - worker: true - ``` - -1. 完成后,保存并关闭配置。 - -## 3. 
启动克隆的集群 - -将 `cluster-template.yml` 移动到 Rancher CLI 二进制文件所在的目录中。然后运行这个命令: - - ./rancher up --file cluster-template.yml - -**结果**:开始配置你克隆的集群。输入 `./rancher cluster ls` 进行确认。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.6/pages-for-subheaders/amazon-eks-permissions.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.6/pages-for-subheaders/amazon-eks-permissions.md deleted file mode 100644 index 6de98186062..00000000000 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.6/pages-for-subheaders/amazon-eks-permissions.md +++ /dev/null @@ -1,109 +0,0 @@ ---- -title: 创建 EKS 集群 ---- -Amazon EKS 为 Kubernetes 集群提供托管的 controlplane。Amazon EKS 跨多个可用区运行 Kubernetes controlplane 实例,以确保高可用性。Rancher 提供了一个直观的用户界面,用于管理和部署你运行在 Amazon EKS 中的 Kubernetes 集群。通过本指南,你将使用 Rancher 在你的 AWS 账户中快速轻松地启动 Amazon EKS Kubernetes 集群。有关 Amazon EKS 的更多信息,请参阅此[文档](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html)。 - - -## Amazon Web 服务的先决条件 - -:::caution - -部署到 Amazon AWS 会产生费用。有关详细信息,请参阅 [EKS 定价页面](https://aws.amazon.com/eks/pricing/)。 - -::: - -要在 EKS 上设置集群,你需要设置 Amazon VPC(虚拟私有云)。你还需要确保用于创建 EKS 集群的账号具有适当的[权限](#最小-eks-权限)。详情请参阅 [Amazon EKS 先决条件官方指南](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#eks-prereqs)。 - -### Amazon VPC - -你需要建立一个 Amazon VPC 来启动 EKS 集群。VPC 使你能够将 AWS 资源启动到你定义的虚拟网络中。你可以自己设置一个 VPC,并在 Rancher 中创建集群时提供它。如果你创建过程中没有提供,Rancher 将创建一个 VPC。详情请参阅[教程:为你的 Amazon EKS 集群创建具有公有和私有子网的 VPC](https://docs.aws.amazon.com/eks/latest/userguide/create-public-private-vpc.html)。 - -### IAM 策略 - -Rancher 需要访问你的 AWS 账户才能在 Amazon EKS 中预置和管理你的 Kubernetes 集群。你需要在 AWS 账户中为 Rancher 创建一个用户,并定义该用户可以访问的内容。 - -1. 按照[此处](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html)的步骤创建具有编程访问权限的用户。 - -2. 创建一个 IAM 策略,定义该用户在 AWS 账户中有权访问的内容。请务必仅授予此用户所需的最小访问权限。[此处](#最小-eks-权限)列出了 EKS 集群所需的最低权限。请按照[此处](https://docs.aws.amazon.com/eks/latest/userguide/EKS_IAM_user_policies.html)的步骤创建 IAM 策略并将策略绑定到你的用户。 - -3. 
最后,按照[此处](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey)的步骤为该用户创建访问密钥和密文密钥。 - -:::note 重要提示: - -定期轮换访问密钥和密文密钥非常重要。有关详细信息,请参阅此[文档](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#rotating_access_keys_console)。 - -::: - -有关 EKS 的 IAM 策略的更多详细信息,请参阅 [Amazon EKS IAM 策略、角色和权限的官方文档](https://docs.aws.amazon.com/eks/latest/userguide/IAM_policies.html)。 - - -## 创建 EKS 集群 - -使用 Rancher 配置你的 Kubernetes 集群。 - -1. 点击 **☰ > 集群管理**。 -1. 在**集群**页面上,单击**创建**。 -1. 选择 **Amazon EKS**。 -1. 输入**集群名称**。 -1. 使用**成员角色**为集群配置用户授权。点击**添加成员**添加可以访问集群的用户。使用**角色**下拉菜单为每个用户设置权限。 -1. 完成表单的其余部分。如需帮助,请参阅[配置参考](#eks-集群配置参考)。 -1. 单击**创建**。 - -**结果**: - -你已创建集群,集群的状态是**配置中**。Rancher 已在你的集群中。 - -当集群状态变为 **Active** 后,你可访问集群。 - -**Active** 状态的集群会分配到两个项目: - -- `Default`:包含 `default` 命名空间 -- `System`:包含 `cattle-system`,`ingress-nginx`,`kube-public` 和 `kube-system` 命名空间。 - -## EKS 集群配置参考 - -有关 EKS 集群配置选项的完整列表,请参阅[此页面](../reference-guides/cluster-configuration/rancher-server-configuration/eks-cluster-configuration.md)。 - -## 架构 - -下图展示了 Rancher 2.x 的上层架构。下图中,Rancher Server 管理两个 Kubernetes 集群,其中一个由 RKE 创建,另一个由 EKS 创建。 - -
通过 Rancher 的认证代理管理 Kubernetes 集群
- -![架构](/img/rancher-architecture-rancher-api-server.svg) - -## AWS 服务事件 - -有关 AWS 服务事件的信息,请参阅[此页面](https://status.aws.amazon.com/)。 - -## 安全与合规 - -默认情况下,只有创建集群的 IAM 用户或角色才能访问该集群。在没有额外配置的情况下,使用其他用户或角色访问集群将导致错误。在 Rancher 中,这意味着使用映射到未用于创建集群的用户或角色的凭证,导致未经授权的错误。除非用于注册集群的凭证与 EKSCtl 使用的角色或用户匹配,否则 EKSCtl 集群将不会注册到 Rancher。通过将其他用户和角色添加到 kube-system 命名空间中的 aws-auth configmap,可以授权其他用户和角色访问集群。如需更深入的解释和详细说明,请参阅此[文档](https://aws.amazon.com/premiumsupport/knowledge-center/amazon-eks-cluster-access/)。 - -有关 Amazon EKS Kubernetes 集群的安全性和合规性的更多信息,请参阅此[文档](https://docs.aws.amazon.com/eks/latest/userguide/shared-responsibilty.html)。 - -## 教程 - -AWS 开源博客上的这篇[教程](https://aws.amazon.com/blogs/opensource/managing-eks-clusters-rancher/)将指导你使用 Rancher 设置一个 EKS 集群,部署一个可公开访问的示例应用来测试集群,并部署一个使用其他开源软件(如 Grafana 和 influxdb)来实时监控地理信息的示例项目。 - -## 最小 EKS 权限 - -请参阅[此页面](../reference-guides/amazon-eks-permissions/minimum-eks-permissions.md),了解在 Rancher 中使用 EKS 驱动所有功能所需的最小权限。 - -## 同步 - -EKS 配置者可以在 Rancher 和提供商之间同步 EKS 集群的状态。有关其工作原理的技术说明,请参阅[同步](../reference-guides/cluster-configuration/rancher-server-configuration/sync-clusters.md)。 - -有关配置刷新间隔的信息,请参阅[本节](../reference-guides/cluster-configuration/rancher-server-configuration/eks-cluster-configuration.md#配置刷新间隔)。 - -## 故障排除 - -如果你的更改被覆盖,可能是集群数据与 EKS 同步的方式导致的。不要在使用其他源(例如 EKS 控制台)对集群进行更改后,又在五分钟之内在 Rancher 中进行更改。有关其工作原理,以及如何配置刷新间隔的信息,请参阅[同步](#同步)。 - -如果在修改或注册集群时返回未经授权的错误,并且集群不是使用你的凭证所属的角色或用户创建的,请参阅[安全与合规](#安全与合规)。 - -有关 Amazon EKS Kubernetes 集群的任何问题或故障排除详细信息,请参阅此[文档](https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html)。 - -## 以编程方式创建 EKS 集群 - -通过 Rancher 以编程方式部署 EKS 集群的最常见方法是使用 Rancher 2 Terraform Provider。详情请参见[使用 Terraform 创建集群](https://registry.terraform.io/providers/rancher/rancher2/latest/docs/resources/cluster)。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.6/reference-guides/amazon-eks-permissions/minimum-eks-permissions.md 
b/i18n/zh/docusaurus-plugin-content-docs/version-2.6/reference-guides/amazon-eks-permissions/minimum-eks-permissions.md deleted file mode 100644 index f2a503dcced..00000000000 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.6/reference-guides/amazon-eks-permissions/minimum-eks-permissions.md +++ /dev/null @@ -1,223 +0,0 @@ ---- -title: 最小 EKS 权限 ---- - -此处提供在 Rancher 中使用 EKS 驱动所有功能所需的最小权限。Rancher 需要额外的权限来配置`服务角色`和 `VPC` 资源。你可以选择在创建集群**之前**创建这些资源,以便在定义集群配置时选择这些资源。 - -| 资源 | 描述 | ----------|------------ -| 服务角色 | 服务角色向 Kubernetes 提供管理资源所需的权限。Rancher 可以使用以下[服务角色权限](#服务角色权限)来创建服务角色。 | -| VPC | 提供 EKS 和 Worker 节点使用的隔离网络资源。Rancher 使用以下 [VPC 权限](#vpc-权限)创建 VPC 资源。 | - - -资源定位使用 `*` 作为在 Rancher 中创建 EKS 集群之前,无法已知创建的资源的名称(ARN)。 - -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "EC2Permisssions", - "Effect": "Allow", - "Action": [ - "ec2:RunInstances", - "ec2:RevokeSecurityGroupIngress", - "ec2:RevokeSecurityGroupEgress", - "ec2:DescribeInstanceTypes", - "ec2:DescribeRegions", - "ec2:DescribeVpcs", - "ec2:DescribeTags", - "ec2:DescribeSubnets", - "ec2:DescribeSecurityGroups", - "ec2:DescribeRouteTables", - "ec2:DescribeLaunchTemplateVersions", - "ec2:DescribeLaunchTemplates", - "ec2:DescribeKeyPairs", - "ec2:DescribeInternetGateways", - "ec2:DescribeImages", - "ec2:DescribeAvailabilityZones", - "ec2:DescribeAccountAttributes", - "ec2:DeleteTags", - "ec2:DeleteSecurityGroup", - "ec2:DeleteKeyPair", - "ec2:CreateTags", - "ec2:CreateSecurityGroup", - "ec2:CreateLaunchTemplateVersion", - "ec2:CreateLaunchTemplate", - "ec2:CreateKeyPair", - "ec2:AuthorizeSecurityGroupIngress", - "ec2:AuthorizeSecurityGroupEgress" - ], - "Resource": "*" - }, - { - "Sid": "CloudFormationPermisssions", - "Effect": "Allow", - "Action": [ - "cloudformation:ListStacks", - "cloudformation:ListStackResources", - "cloudformation:DescribeStacks", - "cloudformation:DescribeStackResources", - "cloudformation:DescribeStackResource", - "cloudformation:DeleteStack", - 
"cloudformation:CreateStackSet", - "cloudformation:CreateStack" - ], - "Resource": "*" - }, - { - "Sid": "IAMPermissions", - "Effect": "Allow", - "Action": [ - "iam:PassRole", - "iam:ListRoles", - "iam:ListRoleTags", - "iam:ListInstanceProfilesForRole", - "iam:ListInstanceProfiles", - "iam:ListAttachedRolePolicies", - "iam:GetRole", - "iam:GetInstanceProfile", - "iam:DetachRolePolicy", - "iam:DeleteRole", - "iam:CreateRole", - "iam:AttachRolePolicy" - ], - "Resource": "*" - }, - { - "Sid": "KMSPermisssions", - "Effect": "Allow", - "Action": "kms:ListKeys", - "Resource": "*" - }, - { - "Sid": "EKSPermisssions", - "Effect": "Allow", - "Action": [ - "eks:UpdateNodegroupVersion", - "eks:UpdateNodegroupConfig", - "eks:UpdateClusterVersion", - "eks:UpdateClusterConfig", - "eks:UntagResource", - "eks:TagResource", - "eks:ListUpdates", - "eks:ListTagsForResource", - "eks:ListNodegroups", - "eks:ListFargateProfiles", - "eks:ListClusters", - "eks:DescribeUpdate", - "eks:DescribeNodegroup", - "eks:DescribeFargateProfile", - "eks:DescribeCluster", - "eks:DeleteNodegroup", - "eks:DeleteFargateProfile", - "eks:DeleteCluster", - "eks:CreateNodegroup", - "eks:CreateFargateProfile", - "eks:CreateCluster" - ], - "Resource": "*" - } - ] -} -``` - -### 服务角色权限 - -指的是 Rancher 在 EKS 集群创建过程中,代表用户创建服务角色所需的权限。 - -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "IAMPermisssions", - "Effect": "Allow", - "Action": [ - "iam:AddRoleToInstanceProfile", - "iam:AttachRolePolicy", - "iam:CreateInstanceProfile", - "iam:CreateRole", - "iam:CreateServiceLinkedRole", - "iam:DeleteInstanceProfile", - "iam:DeleteRole", - "iam:DetachRolePolicy", - "iam:GetInstanceProfile", - "iam:GetRole", - "iam:ListAttachedRolePolicies", - "iam:ListInstanceProfiles", - "iam:ListInstanceProfilesForRole", - "iam:ListRoles", - "iam:ListRoleTags", - "iam:PassRole", - "iam:RemoveRoleFromInstanceProfile" - ], - "Resource": "*" - } - ] -} -``` - -创建 EKS 集群时,Rancher 将创建具有以下信任策略的服务角色: - -```json -{ - 
"Version": "2012-10-17", - "Statement": [ - { - "Action": "sts:AssumeRole", - "Principal": { - "Service": "eks.amazonaws.com" - }, - "Effect": "Allow", - "Sid": "" - } - ] -} -``` - -此角色还将具有两个角色策略,其中包含以下策略 ARN: - -``` -arn:aws:iam::aws:policy/AmazonEKSClusterPolicy -arn:aws:iam::aws:policy/AmazonEKSServicePolicy -``` - -### VPC 权限 - -Rancher 创建 VPC 和关联资源所需的权限。 - -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "VPCPermissions", - "Effect": "Allow", - "Action": [ - "ec2:ReplaceRoute", - "ec2:ModifyVpcAttribute", - "ec2:ModifySubnetAttribute", - "ec2:DisassociateRouteTable", - "ec2:DetachInternetGateway", - "ec2:DescribeVpcs", - "ec2:DeleteVpc", - "ec2:DeleteTags", - "ec2:DeleteSubnet", - "ec2:DeleteRouteTable", - "ec2:DeleteRoute", - "ec2:DeleteInternetGateway", - "ec2:CreateVpc", - "ec2:CreateSubnet", - "ec2:CreateSecurityGroup", - "ec2:CreateRouteTable", - "ec2:CreateRoute", - "ec2:CreateInternetGateway", - "ec2:AttachInternetGateway", - "ec2:AssociateRouteTable" - ], - "Resource": "*" - } - ] -} -``` From fc9a53e4d35f600f0c814cb3f612d9b60a15a910 Mon Sep 17 00:00:00 2001 From: vickyhella Date: Tue, 28 Mar 2023 10:59:48 +0800 Subject: [PATCH 17/22] Copy v2.7 EN hardening guides to the ZH folder --- .../k3s-hardening-guide-with-cis-benchmark.md | 549 +++ ...sessment-guide-with-cis-v1.20-benchmark.md | 3133 ++++++++++++++++ ...sessment-guide-with-cis-v1.23-benchmark.md | 3143 +++++++++++++++++ 3 files changed, 6825 insertions(+) create mode 100644 i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-v2.7-hardening-guides/k3s-hardening-guide-with-cis-benchmark.md create mode 100644 i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-v2.7-hardening-guides/k3s-self-assessment-guide-with-cis-v1.20-benchmark.md create mode 100644 
i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-v2.7-hardening-guides/k3s-self-assessment-guide-with-cis-v1.23-benchmark.md diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-v2.7-hardening-guides/k3s-hardening-guide-with-cis-benchmark.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-v2.7-hardening-guides/k3s-hardening-guide-with-cis-benchmark.md new file mode 100644 index 00000000000..dbcd25f520b --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-v2.7-hardening-guides/k3s-hardening-guide-with-cis-benchmark.md @@ -0,0 +1,549 @@ +--- +title: K3s Hardening Guide with CIS Benchmark +--- + +This document provides prescriptive guidance for hardening a production installation of a K3s cluster to be provisioned with Rancher v2.7. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Information Security (CIS). + +:::note + +This hardening guide describes how to secure the nodes in your cluster, and it is recommended to follow this guide before installing Kubernetes. + +::: + +This hardening guide is intended to be used for K3s clusters and associated with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher: + +| Rancher Version | CIS Benchmark Version | Kubernetes Version | +| --------------- | --------------------- | ------------------ | +| Rancher v2.7 | Benchmark v1.20 | Kubernetes v1.21 | +| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.22 up to v1.24 | + +### Overview + +For more details about evaluating a hardened K3s cluster against the official CIS benchmark, refer to K3s - CIS Benchmark - Self-Assessment Guide - Rancher v2.7 for [CIS v1.20](k3s-self-assessment-guide-with-cis-v1.20-benchmark.md) and [CIS v1.23](k3s-self-assessment-guide-with-cis-v1.23-benchmark.md). 
+ +K3s has a number of security mitigations applied and turned on by default and will pass a number of the Kubernetes CIS controls without modification. There are some notable exceptions to this that require manual intervention to fully comply with the CIS Benchmark: + +1. K3s will not modify the host operating system. Any host-level modifications will need to be done manually. +2. Certain CIS policy controls for `PodSecurityPolicies` and `NetworkPolicies` will restrict the functionality of the cluster. You must opt into having K3s configure these by adding the appropriate options (enabling of admission plugins) to your command-line flags or configuration file as well as manually applying appropriate policies. Further details are presented in the sections below. + +The first section (1.1) of the CIS Benchmark concerns itself primarily with pod manifest permissions and ownership. K3s doesn't utilize these for the core components since everything is packaged into a single binary. + +## Host-level Requirements + +### Ensure `protect-kernel-defaults` is set (control 4.2.6) + +This is a kubelet flag that will cause the kubelet to exit if the required kernel parameters are unset or are set to values that are different from the kubelet's defaults. + +This can be remediated by adding the following argument line to K3s cluster configuration file: + +```yaml +spec: + rkeConfig: + machineSelectorConfig: + - config: + protect-kernel-defaults: true # Control 4.2.6 +``` + +#### Set kernel parameters + +Create a file called `/etc/sysctl.d/90-kubelet.conf` and add the snippet below. Then run `sysctl -p /etc/sysctl.d/90-kubelet.conf`. + +```bash +vm.panic_on_oom=0 +vm.overcommit_memory=1 +kernel.panic=10 +kernel.panic_on_oops=1 +``` + +This configuration needs to be done before setting the kubelet flag, otherwise K3s will fail to start. 
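As a sanity check, the kernel parameters above can be verified against the kubelet's expected values before enabling `protect-kernel-defaults`. The sketch below is illustrative and not part of K3s; the `get_param` helper is an assumption that simply reads the values from `/proc/sys`.

```shell
# Illustrative check (not part of K3s): confirm the kernel parameters that
# protect-kernel-defaults requires are set to the values the kubelet expects.
get_param() {
  # e.g. vm.panic_on_oom -> /proc/sys/vm/panic_on_oom
  cat "/proc/sys/$(echo "$1" | tr '.' '/')"
}

check_kernel_params() {
  status=0
  for pair in vm.panic_on_oom=0 vm.overcommit_memory=1 kernel.panic=10 kernel.panic_on_oops=1; do
    key=${pair%=*}
    want=${pair#*=}
    got=$(get_param "$key" 2>/dev/null) || { echo "unreadable: $key"; status=1; continue; }
    [ "$got" = "$want" ] || { echo "mismatch: $key=$got (want $want)"; status=1; }
  done
  return $status
}

# Usage: check_kernel_params && echo "kernel parameters OK"
```

If any parameter reports a mismatch, re-apply `sysctl -p /etc/sysctl.d/90-kubelet.conf` before starting K3s.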
+ +## Kubernetes Runtime Requirements + +The runtime requirements to comply with the CIS Benchmark are centered around pod security policies (PSPs) and its admission control plugin, network policies and API Server auditing logs. These are outlined in this section. K3s doesn't apply any default PSPs or network policies. However, K3s ships with a controller that is meant to apply a given set of network policies. By default, K3s runs with the `NodeRestriction` admission controller. To enable PSPs, add the following line to K3s cluster configuration file: + +```yaml +spec: + rkeConfig: + machineGlobalConfig: + kube-apiserver-arg: + - enable-admission-plugins=NodeRestriction,PodSecurityPolicy # CIS 1.2.16 and CIS 5.2 +``` + +This will have the effect of maintaining the `NodeRestriction` plugin as well as enabling the `PodSecurityPolicy`. + +### Pod Security Policies (control 5.2) + +When PSPs are enabled, a policy can be applied to satisfy the necessary controls described in section 5.2 of the CIS Benchmark. + +Here is an example of a compliant PSP. + +```yaml +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: restricted-psp +spec: + privileged: false # CIS - 5.2.1 + allowPrivilegeEscalation: false # CIS - 5.2.5 + requiredDropCapabilities: # CIS - 5.2.7/8/9 + - ALL + volumes: + - 'configMap' + - 'emptyDir' + - 'projected' + - 'secret' + - 'downwardAPI' + - 'csi' + - 'persistentVolumeClaim' + - 'ephemeral' + hostNetwork: false # CIS - 5.2.4 + hostIPC: false # CIS - 5.2.3 + hostPID: false # CIS - 5.2.2 + runAsUser: + rule: 'MustRunAsNonRoot' # CIS - 5.2.6 + seLinux: + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + - min: 1 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + - min: 1 + max: 65535 + readOnlyRootFilesystem: false +``` + +For the above PSP to be effective, we need to create a `ClusterRole` and a `ClusterRoleBinding`. 
We also need to include a "system unrestricted policy" which is needed for system-level pods that require additional privileges, as exemplified below. + +```yaml +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: restricted-psp +spec: + privileged: false + allowPrivilegeEscalation: false + requiredDropCapabilities: + - ALL + volumes: + - 'configMap' + - 'emptyDir' + - 'projected' + - 'secret' + - 'downwardAPI' + - 'csi' + - 'persistentVolumeClaim' + - 'ephemeral' + hostNetwork: false + hostIPC: false + hostPID: false + runAsUser: + rule: 'MustRunAsNonRoot' + seLinux: + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + - min: 1 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + - min: 1 + max: 65535 + readOnlyRootFilesystem: false +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: psp:restricted-psp + labels: + addonmanager.kubernetes.io/mode: EnsureExists +rules: +- apiGroups: ['extensions'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: + - restricted-psp +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: default:restricted-psp + labels: + addonmanager.kubernetes.io/mode: EnsureExists +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: psp:restricted-psp +subjects: +- kind: Group + name: system:authenticated + apiGroup: rbac.authorization.k8s.io +--- +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: system-unrestricted-psp +spec: + allowPrivilegeEscalation: true + allowedCapabilities: + - '*' + fsGroup: + rule: RunAsAny + hostIPC: true + hostNetwork: true + hostPID: true + hostPorts: + - max: 65535 + min: 0 + privileged: true + runAsUser: + rule: RunAsAny + seLinux: + rule: RunAsAny + supplementalGroups: + rule: RunAsAny + volumes: + - '*' +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: system-unrestricted-node-psp-rolebinding 
+roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: system-unrestricted-psp-role +subjects: +- apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:nodes +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: system-unrestricted-psp-role +rules: +- apiGroups: + - policy + resourceNames: + - system-unrestricted-psp + resources: + - podsecuritypolicies + verbs: + - use +``` + +The policy file `policy.yaml` can be placed in the `/var/lib/rancher/k3s/server/manifests` directory. Ideally, both the policy file and its directory hierarchy should be created before starting K3s. Restrictive access permissions are recommended to avoid leaking potentially sensitive information. + +```bash +sudo mkdir -p -m 700 /var/lib/rancher/k3s/server/manifests +``` + +### Network Policies (control 5.3.2) + +CIS requires that all namespaces have a network policy applied that reasonably limits traffic into namespaces and pods. + +:::note + +This is a manual check in the CIS Benchmark. The CIS scan will flag the result as `warning`, because manual inspection by the cluster operator is necessary. + +::: + +Here is an example of a compliant network policy. + +```yaml +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: intra-namespace + namespace: kube-system +spec: + podSelector: {} + ingress: + - from: + - namespaceSelector: + matchLabels: + name: kube-system +``` + +:::note + +Kubernetes add-ons such as CNI, DNS, and Ingress run as pods in the `kube-system` namespace. Therefore, this namespace will have a policy that is less restrictive so that these components can run properly. + +::: + +With the applied restrictions, DNS will be blocked unless purposely allowed. Below is a network policy that will allow DNS-related traffic.
+ +```yaml +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: default-network-dns-policy + namespace: <NAMESPACE> +spec: + ingress: + - ports: + - port: 53 + protocol: TCP + - port: 53 + protocol: UDP + podSelector: + matchLabels: + k8s-app: kube-dns + policyTypes: + - Ingress +``` + +The metrics-server and Traefik ingress controller will be blocked by default if network policies are not created to allow access. + +```yaml +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: allow-all-metrics-server + namespace: kube-system +spec: + podSelector: + matchLabels: + k8s-app: metrics-server + ingress: + - {} + policyTypes: + - Ingress +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: allow-all-svclbtraefik-ingress + namespace: kube-system +spec: + podSelector: + matchLabels: + svccontroller.k3s.cattle.io/svcname: traefik + ingress: + - {} + policyTypes: + - Ingress +``` + +:::note + +Operators must manage network policies as normal for additional namespaces that are created. + +::: + +The network policies can be added in the same policy file used for PSPs in `/var/lib/rancher/k3s/server/manifests/policy.yaml` or in their own file. + +### API Server audit configuration + +CIS requirements v1.20 - 1.2.22 to 1.2.25 and v1.23 - 1.2.19 to 1.2.22 are related to configuring audit logs for the API Server. By default, K3s doesn't create the log directory or the audit policy, as auditing requirements are specific to each user's policies and environment. + +Ideally, the log directory should be created before starting K3s. Restrictive access permissions are recommended to avoid leaking potentially sensitive information. + +```bash +sudo mkdir -p -m 700 /var/lib/rancher/k3s/server/logs +``` + +A starter audit policy to log request metadata is provided below. The policy should be written to a file named `audit.yaml` in the `/var/lib/rancher/k3s/server` directory.
Detailed information about policy configuration for the API server can be found in the Kubernetes [documentation](https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/). + +```yaml +apiVersion: audit.k8s.io/v1 +kind: Policy +rules: +- level: Metadata +``` + +Further configurations, as described below, are also needed to pass CIS checks and are not configured by default in K3s, because they vary depending on the users' environment and needs: + +- Ensure that the `--audit-log-path` argument is set. +- Ensure that the `--audit-log-maxage` argument is set to 30 or as appropriate. +- Ensure that the `--audit-log-maxbackup` argument is set to 10 or as appropriate. +- Ensure that the `--audit-log-maxsize` argument is set to 100 or as appropriate. + +To enable and configure audit logs, add the following line to K3s cluster configuration file: + +```yaml +spec: + rkeConfig: + machineGlobalConfig: + kube-apiserver-arg: + - audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml # CIS v1.20/v1.23 3.2.1 + - audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log # CIS v1.20 1.2.21 and CIS v1.23 1.2.19 + - audit-log-maxage=30 # CIS v1.20 1.2.22 and CIS v1.23 1.2.20 + - audit-log-maxbackup=10 # CIS v1.20 1.2.23 and CIS v1.23 1.2.21 + - audit-log-maxsize=100 # CIS v1.20 1.2.24 and CIS v1.23 1.2.22 +``` + +## Known Issues + +The following are controls that K3s currently does not pass by default. Each gap will be explained, along with a note clarifying whether it can be passed through manual operator intervention, or if it will be addressed in a future release of K3s. + +### Control CIS v1.20 - 1.2.13 / CIS v1.23 - 1.2.14 +Ensure that the admission control plugin `ServiceAccount` is set +
+Rationale +Follow the documentation and create `ServiceAccount` objects as per your environment. Then, edit the API server pod specification file +on the control plane node and ensure that the `--disable-admission-plugins` parameter is set to a value that does not include `ServiceAccount`. + +This can be remediated by adding the following argument line to the K3s cluster configuration file: + +```yaml +spec: + rkeConfig: + machineGlobalConfig: + kube-apiserver-arg: + - enable-admission-plugins=ServiceAccount # CIS v1.20 - 1.2.13 / CIS v1.23 - 1.2.14 +``` +
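Because K3s runs the API server in-process rather than as a static pod, auditing this control means inspecting the arguments of the running `k3s server` process (for example via `journalctl -u k3s`). The helper below is an illustrative sketch for checking a captured argument string; it is not part of K3s or the CIS tooling, and the `journalctl` line format shown in the usage comment is an assumption.

```shell
# Illustrative sketch: given the kube-apiserver argument string captured from
# the running K3s process, check that a plugin is listed in
# --enable-admission-plugins.
has_admission_plugin() {
  # $1 = full argument string, $2 = plugin name
  echo "$1" | tr ' ' '\n' \
    | sed -n 's/^--enable-admission-plugins=//p' \
    | tr ',' '\n' | grep -qx "$2"
}

# Usage (assumed log layout):
#   args=$(journalctl -u k3s | grep -m1 'Running kube-apiserver')
#   has_admission_plugin "$args" ServiceAccount && echo "ServiceAccount enabled"
```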
+ +### Control CIS v1.20 1.2.26 and CIS v1.23 1.2.23 +Ensure that the `--request-timeout` argument is set as appropriate. +
+Rationale +Setting global request timeout allows extending the API server request timeout limit to a duration appropriate to the user's connection speed. By default, it is set to 60 seconds which might be problematic on slower connections making cluster resources inaccessible once the data volume for requests exceeds what can be transmitted in 60 seconds. But, setting this timeout limit to be too large can exhaust the API server resources making it prone to Denial-of-Service attack. Hence, it is recommended to set this limit as appropriate and change the default limit of 60 seconds only if needed. + +This can be remediated by adding the following argument line to K3s cluster configuration file: + +```yaml +spec: + rkeConfig: + machineGlobalConfig: + kube-apiserver-arg: + - request-timeout=300s # Control CIS v1.20 1.2.26 and CIS v1.23 1.2.23 +``` +
+ +### Control CIS v1.23 1.2.24 +Ensure that the `--service-account-lookup` argument is set to true. +
+Rationale +If `--service-account-lookup` is not enabled, the apiserver only verifies that the authentication token is valid, and does not validate that the service account token mentioned in the request is actually present in etcd. This allows using a service account token even after the corresponding service account is deleted. This is an example of time of check to time of use security issue. + +This can be remediated by adding the following argument line to K3s cluster configuration file: + +```yaml +spec: + rkeConfig: + machineGlobalConfig: + kube-apiserver-arg: + - service-account-lookup=true # Control CIS v1.23 1.2.24 +``` +
+ +### CIS v1.20 1.2.32 and CIS v1.23 1.2.30 +Ensure that the `--encryption-provider-config` argument is set as appropriate. +
+Rationale +`etcd` is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be encrypted at rest to avoid any disclosures. + +Detailed steps on how to configure secrets encryption in K3s are available in [K3s Secrets Encryption Config](https://docs.k3s.io/security/secrets-encryption). +
+ +### Control CIS v1.20 1.2.33 and CIS v1.23 1.2.31 +Ensure that encryption providers are appropriately configured. +
+Rationale +Where `etcd` encryption is used, it is important to ensure that the appropriate set of encryption providers is used. Currently, the `aescbc`, `kms` and `secretbox` are likely to be appropriate options. + +This can be remediated by passing a valid configuration to `k3s` as outlined above. Detailed steps on how to configure secrets encryption in K3s are available in [K3s Secrets Encryption Config](https://docs.k3s.io/security/secrets-encryption). +
+ +### Control 4.2.7 +Ensure that the `--make-iptables-util-chains` argument is set to true. +
+Rationale +Kubelets can automatically manage the required changes to iptables based on how you choose your networking options for the pods. It is recommended to let kubelets manage the changes to iptables. This ensures that the iptables configuration remains in sync with pods networking configuration. Manually configuring iptables with dynamic pod network configuration changes might hamper the communication between pods/containers and to the outside world. You might have iptables rules too restrictive or too open. + +This can be remediated by adding the following argument line to K3s cluster configuration file: + +```yaml +spec: + rkeConfig: + machineSelectorConfig: + - config: + kubelet-arg: + - make-iptables-util-chains=true # Control 4.2.7 +``` +
+ +### Control 5.1.5 +Ensure that default service accounts are not actively used +
+Rationale +Kubernetes provides a `default` service account which is used by cluster workloads where no specific service account is assigned to the pod. + +Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. + +The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments. + +This can be remediated by updating the `automountServiceAccountToken` field to `false` for the `default` service account in each namespace. + +For `default` service accounts in the built-in namespaces (`kube-system`, `kube-public`, `kube-node-lease`, and `default`), K3s does not automatically do this. You can manually update this field on these service accounts to pass the control, or use the script below to automate this task. + +Save the following configuration to a file called `account_update.yaml`. + +```yaml +apiVersion: v1 +kind: ServiceAccount +metadata: + name: default +automountServiceAccountToken: false +``` + +Create a bash script file called `account_update.sh`. Be sure to `sudo chmod +x account_update.sh` so the script has execute permissions. + +```bash +#!/bin/bash -e + +for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do + echo -n "Patching namespace $namespace - " + kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)" +done +``` + +Execute the script to update the default service account in each namespace. +
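A complementary check (an illustrative sketch, assuming `kubectl` access to the cluster) can confirm that every namespace's `default` service account has `automountServiceAccountToken` set to `false` after the patch script has run:

```shell
# Illustrative verification: report any namespace whose default service
# account still allows automounting of the service account token.
verify_default_sa() {
  fail=0
  for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
    val=$(kubectl get serviceaccount default -n "$ns" \
      -o jsonpath='{.automountServiceAccountToken}')
    [ "$val" = "false" ] || { echo "namespace $ns: automountServiceAccountToken=$val"; fail=1; }
  done
  return $fail
}

# Usage: verify_default_sa && echo "all default service accounts patched"
```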
+ + +### Reference Hardened K3s Template Configuration + +The reference template configuration below is used in Rancher to create a hardened K3s custom cluster based on each CIS control presented in this guide. This reference does not include other required **cluster configuration** directives which will vary depending on your environment. + +```yaml +apiVersion: provisioning.cattle.io/v1 +kind: Cluster +metadata: + name: # Define cluster name + annotations: +spec: + defaultPodSecurityPolicyTemplateName: '' # Define the PSP policy to use + enableNetworkPolicy: true + kubernetesVersion: # Define K3s version + rkeConfig: + machineGlobalConfig: + kube-apiserver-arg: + - enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount # CIS 1.2.16, CIS 5.2 and CIS v1.20 - 1.2.13 / CIS v1.23 - 1.2.14 + - audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml # CIS v1.20/v1.23 3.2.1 + - audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log # CIS v1.20 1.2.21 and CIS v1.23 1.2.19 + - audit-log-maxage=30 # CIS v1.20 1.2.22 and CIS v1.23 1.2.20 + - audit-log-maxbackup=10 # CIS v1.20 1.2.23 and CIS v1.23 1.2.21 + - audit-log-maxsize=100 # CIS v1.20 1.2.24 and CIS v1.23 1.2.22 + - request-timeout=300s # Control CIS v1.20 1.2.26 and CIS v1.23 1.2.23 + - service-account-lookup=true # Control CIS v1.23 1.2.24 + secrets-encryption: true + machineSelectorConfig: + - config: + kubelet-arg: + - make-iptables-util-chains=true # Control 4.2.7 + protect-kernel-defaults: true # Control 4.2.6 +``` + +### Conclusion + +If you have followed this guide, your K3s custom cluster provisioned by Rancher will be configured to pass the CIS Kubernetes Benchmark. You can review our K3s CIS Benchmark Self-Assessment Guide for [CIS v1.20](k3s-self-assessment-guide-with-cis-v1.20-benchmark.md) and [CIS v1.23](k3s-self-assessment-guide-with-cis-v1.23-benchmark.md) to understand how we verified each of the benchmarks and how you can do the same on your cluster. 
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-v2.7-hardening-guides/k3s-self-assessment-guide-with-cis-v1.20-benchmark.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-v2.7-hardening-guides/k3s-self-assessment-guide-with-cis-v1.20-benchmark.md new file mode 100644 index 00000000000..cf5c22b97d9 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-v2.7-hardening-guides/k3s-self-assessment-guide-with-cis-v1.20-benchmark.md @@ -0,0 +1,3133 @@ +--- +title: K3s CIS v1.20 Benchmark - Self-Assessment Guide - Rancher v2.7 +--- + +### K3s CIS Kubernetes Benchmark v1.20 - K3s with Kubernetes v1.21 + +#### Overview + +This document is a companion to the [Rancher v2.7 K3s security hardening guide](k3s-hardening-guide-with-cis-benchmark.md). The hardening guide provides prescriptive guidance for hardening a production installation of Rancher with K3s provisioned clusters, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the CIS Kubernetes Benchmark. + +This guide corresponds to specific versions of the hardening guide, Rancher, CIS Benchmark and Kubernetes: + +| Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version | +| ----------------------- | --------------- | --------------------- | ------------------- | +| Hardening Guide | Rancher v2.7 | CIS v1.20 | Kubernetes v1.21 | + +This document is to be used by Rancher operators, security teams, auditors and decision makers. + +For more information about each control, including detailed descriptions and remediations for failing tests, you can refer to the corresponding section of the CIS Kubernetes Benchmark v1.20. You can download the benchmark, after creating a free account, in [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/). 
+ +#### Testing controls methodology + +Each control in the CIS Kubernetes Benchmark was evaluated against a K3s cluster that was configured according to the accompanying hardening guide. + +Where control audits differ from the original CIS benchmark, the audit commands specific to K3s are provided for testing. + +These are the possible results for each control: + +- **Pass** - The K3s cluster under test passed the audit outlined in the benchmark. +- **Not Applicable** - The control is not applicable to K3s because of how it is designed to operate. The remediation section will explain why this is so. +- **Warn** - The control is manual in the CIS benchmark and it depends on the cluster's use case or some other factor that must be determined by the cluster operator. These controls have been evaluated to ensure K3s does not prevent their implementation, but no further configuration or auditing of the cluster under test has been performed. + +This guide makes the assumption that K3s is running as a Systemd unit. Your installation may vary and will require you to adjust the "audit" commands to fit your scenario. + +:::note + +Only `automated` tests (previously called `scored`) are covered in this guide. + +::: + +### Controls + +--- + +## 1.1 Master Node Configuration Files +### 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the +master node. +For example, chmod 644 /etc/kubernetes/manifests/kube-apiserver.yaml + +### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. 
+For example, +chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml + +### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chmod 644 /etc/kubernetes/manifests/kube-controller-manager.yaml + +### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml + +### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chmod 644 /etc/kubernetes/manifests/kube-scheduler.yaml + +### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml + +### 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chmod 644 /etc/kubernetes/manifests/etcd.yaml + +### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. 
+For example, +chown root:root /etc/kubernetes/manifests/etcd.yaml + +### 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chmod 644 'path/to/cni/files' + +### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chown root:root 'path/to/cni/files' + +### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated) + + +**Result:** pass + +**Remediation:** +On the etcd server node, get the etcd data directory, passed as an argument --data-dir, +from the below command: +ps -ef | grep etcd +Run the below command (based on the etcd data directory found above). 
For example, +chmod 700 /var/lib/etcd + +**Audit Script:** `check_for_k3s_etcd.sh` + +```bash +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "--client-cert-auth=true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "--client-cert-auth=true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit Execution:** + +```bash +./check_for_k3s_etcd.sh 1.1.11 
+``` + +**Expected Result**: + +```console +'700' is equal to '700' +``` + +**Returned Value**: + +```console +700 +``` + +### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) + + +**Result:** Not Applicable + +**Remediation:** +On the etcd server node, get the etcd data directory, passed as an argument --data-dir, +from the below command: +ps -ef | grep etcd +Run the below command (based on the etcd data directory found above). +For example, chown etcd:etcd /var/lib/etcd + +### 1.1.13 Ensure that the admin.conf file permissions are set to 644 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chmod 644 /var/lib/rancher/k3s/server/cred/admin.kubeconfig + +### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chown root:root /etc/kubernetes/admin.conf + +**Audit:** + +```bash +/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/admin.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/admin.kubeconfig; fi' +``` + +**Expected Result**: + +```console +'root:root' is equal to 'root:root' +``` + +**Returned Value**: + +```console +root:root +``` + +### 1.1.15 Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. 
+For example, +chmod 644 scheduler + +**Audit:** + +```bash +/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi' +``` + +**Expected Result**: + +```console +permissions has permissions 644, expected 644 or more restrictive +``` + +**Returned Value**: + +```console +permissions=644 +``` + +### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chown root:root scheduler + +**Audit:** + +```bash +/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi' +``` + +**Expected Result**: + +```console +'root:root' is present +``` + +**Returned Value**: + +```console +root:root +``` + +### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chmod 644 controllermanager + +**Audit:** + +```bash +/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/controller.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/controller.kubeconfig; fi' +``` + +**Expected Result**: + +```console +permissions has permissions 644, expected 644 or more restrictive +``` + +**Returned Value**: + +```console +permissions=644 +``` + +### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. 
+For example, +chown root:root controllermanager + +**Audit:** + +```bash +stat -c %U:%G /var/lib/rancher/k3s/server/tls +``` + +**Expected Result**: + +```console +'root:root' is equal to 'root:root' +``` + +**Returned Value**: + +```console +root:root +``` + +### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chown -R root:root /etc/kubernetes/pki/ + +**Audit:** + +```bash +find /var/lib/rancher/k3s/server/tls | xargs stat -c %U:%G +``` + +**Expected Result**: + +```console +'root:root' is present +``` + +**Returned Value**: + +```console +root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root +``` + +### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Manual) + + +**Result:** warn + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chmod -R 644 /etc/kubernetes/pki/*.crt + +**Audit:** + +```bash +stat -c %n %a /var/lib/rancher/k3s/server/tls/*.crt +``` + +### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual) + + +**Result:** warn + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. 
+For example, +chmod -R 600 /etc/kubernetes/pki/*.key + +**Audit:** + +```bash +stat -c %n %a /var/lib/rancher/k3s/server/tls/*.key +``` + +## 1.2 API Server +### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual) + + +**Result:** warn + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the below parameter. +--anonymous-auth=false + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'anonymous-auth' +``` + +### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the documentation and configure alternate mechanisms for authentication. Then, +edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and remove the `--token-auth-file='filename'` parameter. + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep +``` + +**Expected Result**: + +```console +'--token-auth-file' is not present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount 
--enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.3 Ensure that the --kubelet-https argument is set to true (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and remove the --kubelet-https parameter. 
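Every audit in section 1.2 follows the same pattern: pull the most recent `Running kube-apiserver` line from the K3s journal, then grep it for the flag under test. As an illustration, that pattern can be factored into a small helper that reads the captured command line from stdin. This is a hypothetical convenience sketch, not part of the official audit commands:

```shell
# Hypothetical helper -- not part of the official audit commands.
# Reads an apiserver command line on stdin and reports whether the
# given flag string appears in it.
has_flag() {
  grep -q -- "$1" && echo "present" || echo "absent"
}

# Against a live K3s server, pipe in the journal line used by the audits:
#   journalctl -D /var/log/journal -u k3s \
#     | grep 'Running kube-apiserver' | tail -n1 | has_flag '--anonymous-auth=false'

# Self-contained demonstration with a captured command line:
echo "kube-apiserver --anonymous-auth=false --authorization-mode=Node,RBAC" \
  | has_flag '--anonymous-auth=false'    # prints: present
```

Note that this checks for a literal substring only; where an expected result depends on a flag's value (for example `--authorization-mode` containing `RBAC`), compare against the full `--flag=value` string.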
+ +### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the +apiserver and kubelets. Then, edit API server pod specification file +/etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the +kubelet client certificate and key parameters as below. +--kubelet-client-certificate='path/to/client-certificate-file' +--kubelet-client-key='path/to/client-key-file' + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority' +``` + +**Expected Result**: + +```console +'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 
--feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and setup the TLS connection between +the apiserver and kubelets. Then, edit the API server pod specification file +/etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the +--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority. 
+--kubelet-certificate-authority='ca-string' + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority' +``` + +**Expected Result**: + +```console +'--kubelet-certificate-authority' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the --authorization-mode parameter to values other than AlwaysAllow. +One such example could be as below. 
+--authorization-mode=RBAC + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode' +``` + +**Expected Result**: + +```console +'--authorization-mode' does not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the --authorization-mode parameter to a value that includes Node. 
+--authorization-mode=Node,RBAC + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode' +``` + +**Expected Result**: + +```console +'--authorization-mode' has 'Node' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the --authorization-mode parameter to a value that includes RBAC, +for example: +--authorization-mode=Node,RBAC + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode' +``` + +**Expected Result**: + +```console +'--authorization-mode' has 'RBAC' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs 
--client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.9 Ensure that the admission control plugin 
EventRateLimit is set (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and set the desired limits in a configuration file. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +and set the below parameters. +--enable-admission-plugins=...,EventRateLimit,... +--admission-control-config-file='path/to/configuration/file' + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins' +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'EventRateLimit' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt 
--kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and either remove the --enable-admission-plugins parameter, or set it to a +value that does not include AlwaysAdmit. 
+ +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins' +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key 
--request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual) + + +**Result:** warn + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the --enable-admission-plugins parameter to include +AlwaysPullImages. +--enable-admission-plugins=...,AlwaysPullImages,... 
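+
+Note that on K3s there is no `/etc/kubernetes/manifests/kube-apiserver.yaml` static pod manifest; kube-apiserver flags are passed through the K3s server configuration instead. As an illustrative sketch (the plugin list shown is an assumption and should match the plugins already enabled in your environment), the flag could be supplied via `/etc/rancher/k3s/config.yaml`:
+
+```yaml
+# Illustrative only: appends AlwaysPullImages to the admission plugins
+# shown as enabled in the audit output above.
+kube-apiserver-arg:
+  - "enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount,AlwaysPullImages"
+```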
+ +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins' +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'AlwaysPullImages' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the --enable-admission-plugins parameter to include +SecurityContextDeny, unless PodSecurityPolicy is already in place. +--enable-admission-plugins=...,SecurityContextDeny,... 
+ +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins' +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key 
--request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the documentation and create ServiceAccount objects as per your environment. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and ensure that the --disable-admission-plugins parameter is set to a +value that does not include ServiceAccount. 
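+
+Beyond keeping the plugin enabled, "create ServiceAccount objects as per your environment" can include disabling token automount for accounts that do not need API access. A minimal sketch (`build-robot` is a hypothetical name):
+
+```yaml
+# Illustrative ServiceAccount that does not automount an API token.
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: build-robot   # hypothetical name
+  namespace: default
+automountServiceAccountToken: false
+```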
+ +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'ServiceAccount' +``` + +**Expected Result**: + +```console +'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the --disable-admission-plugins parameter to +ensure it does not include NamespaceLifecycle. 
+ +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep +``` + +**Expected Result**: + +```console +'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.15 Ensure that the admission control plugin PodSecurityPolicy is set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the documentation and create Pod Security Policy objects as per your environment. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the --enable-admission-plugins parameter to a +value that includes PodSecurityPolicy: +--enable-admission-plugins=...,PodSecurityPolicy,... +Then restart the API Server. 
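+
+The audit output above shows PodSecurityPolicy already enabled; the remaining step is creating policy objects. A minimal restrictive policy might look like the following sketch (illustrative only; tailor the allowed volumes and user rules to your workloads, and note that PodSecurityPolicy is removed in Kubernetes v1.25 and later):
+
+```yaml
+apiVersion: policy/v1beta1
+kind: PodSecurityPolicy
+metadata:
+  name: restricted-example   # hypothetical name
+spec:
+  privileged: false
+  allowPrivilegeEscalation: false
+  runAsUser:
+    rule: MustRunAsNonRoot
+  seLinux:
+    rule: RunAsAny
+  supplementalGroups:
+    rule: RunAsAny
+  fsGroup:
+    rule: RunAsAny
+  volumes:
+    - configMap
+    - secret
+    - emptyDir
+    - persistentVolumeClaim
+```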
+ +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins' +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'PodSecurityPolicy' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.16 Ensure that the admission control plugin NodeRestriction is set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the --enable-admission-plugins parameter to a +value that includes NodeRestriction. +--enable-admission-plugins=...,NodeRestriction,... 
+ +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins' +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'NodeRestriction' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy 
--requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.17 Ensure that the --insecure-bind-address argument is not set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and remove the --insecure-bind-address parameter. 
+ +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep +``` + +**Expected Result**: + +```console +'--insecure-bind-address' is not present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy 
--requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.18 Ensure that the --insecure-port argument is set to 0 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the below parameter. 
+--insecure-port=0 + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 +``` + +**Expected Result**: + +```console +'--insecure-port' is present OR '--insecure-port' is not present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy 
--requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.19 Ensure that the --secure-port argument is not set to 0 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and either remove the --secure-port parameter or +set it to a different (non-zero) desired port. 
+ +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'secure-port' +``` + +**Expected Result**: + +```console +'--secure-port' is greater than 0 OR '--secure-port' is not present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.20 Ensure that the --profiling argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the below parameter. 
+--profiling=false + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'profiling' +``` + +**Expected Result**: + +```console +'--profiling' is equal to 'false' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy 
--requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.21 Ensure that the --audit-log-path argument is set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the --audit-log-path parameter to a suitable path and +file where you would like audit logs to be written, for example: +--audit-log-path=/var/log/apiserver/audit.log + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-path' +``` + +**Expected Result**: + +```console +'--audit-log-path' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs 
--client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.22 Ensure that the --audit-log-maxage 
argument is set to 30 or as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the --audit-log-maxage parameter to 30 or as an appropriate number of days: +--audit-log-maxage=30 + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxage' +``` + +**Expected Result**: + +```console +'--audit-log-maxage' is greater or equal to 30 +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key 
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.23 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the --audit-log-maxbackup parameter to 10 or to an appropriate +value. 
+--audit-log-maxbackup=10 + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxbackup' +``` + +**Expected Result**: + +```console +'--audit-log-maxbackup' is greater or equal to 10 +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.24 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the --audit-log-maxsize parameter to an appropriate size in MB. 
+For example, to set it as 100 MB: +--audit-log-maxsize=100 + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxsize' +``` + +**Expected Result**: + +```console +'--audit-log-maxsize' is greater or equal to 100 +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key 
--request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.26 Ensure that the --service-account-lookup argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the below parameter. +--service-account-lookup=true +Alternatively, you can delete the --service-account-lookup parameter from this file so +that the default takes effect. 
+ +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'service-account-lookup' +``` + +**Expected Result**: + +```console +'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.27 Ensure that the --service-account-key-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the --service-account-key-file parameter +to the public key file for service accounts: +--service-account-key-file='filename' + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'service-account-key-file' +``` + +**Expected Result**: + +```console +'--service-account-key-file' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 
--cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" 
+``` + +### 1.2.28 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the etcd certificate and key file parameters. +--etcd-certfile='path/to/client-certificate-file' +--etcd-keyfile='path/to/client-key-file' + +**Audit Script:** `check_for_k3s_etcd.sh` + +```bash +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass 
the scan + case $1 in + "1.1.11") + echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "--client-cert-auth=true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "--client-cert-auth=true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit Execution:** + +```bash +./check_for_k3s_etcd.sh 1.2.29 +``` + +**Expected Result**: + +```console +'--etcd-certfile' is present AND '--etcd-keyfile' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt 
--kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.29 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection on the apiserver. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the TLS certificate and private key file parameters. 
+--tls-cert-file='path/to/tls-certificate-file' +--tls-private-key-file='path/to/tls-key-file' + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep -A1 'Running kube-apiserver' | tail -n2 +``` + +**Expected Result**: + +```console +'--tls-cert-file' is present AND '--tls-private-key-file' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt 
--proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259" +``` + +### 1.2.30 Ensure that the --client-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection on the apiserver. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the client certificate authority file. 
+--client-ca-file='path/to/client-ca-file' + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'client-ca-file' +``` + +**Expected Result**: + +```console +'--client-ca-file' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.31 Ensure that the --etcd-cafile argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the etcd certificate authority file parameter. 
+--etcd-cafile='path/to/ca-file' + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-cafile' +``` + +**Expected Result**: + +```console +'--etcd-cafile' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy 
--requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.32 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the --encryption-provider-config parameter to the path of that file: --encryption-provider-config='/path/to/EncryptionConfig/File'
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'encryption-provider-config'
+```
+
+**Expected Result**:
+
+```console
+'--encryption-provider-config' is present
+```
+
+**Returned Value**:
+
+```console
+Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1
--cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" 
+```
+
+### 1.2.33 Ensure that encryption providers are appropriately configured (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+In this file, choose aescbc, kms, or secretbox as the encryption provider.
+
+**Audit:**
+
+```bash
+grep aescbc /path/to/encryption-config.json
+```
+
+### 1.2.34 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the below parameter.
+--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'tls-cipher-suites'
+```
+
+## 1.3 Controller Manager
+### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the master node and set the --terminated-pod-gc-threshold parameter to an appropriate threshold,
+for example:
+--terminated-pod-gc-threshold=10
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'terminated-pod-gc-threshold'
+```
+
+### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the master node and set the below parameter.
+--profiling=false + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'profiling' +``` + +**Expected Result**: + +```console +'--profiling' is equal to 'false' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +``` + +### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated) + + 
+**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the master node to set the below parameter. +--use-service-account-credentials=true + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'use-service-account-credentials' +``` + +**Expected Result**: + +```console +'--use-service-account-credentials' is not equal to 'false' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 
--service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +``` + +### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the master node and set the --service-account-private-key-file parameter +to the private key file for service accounts. +--service-account-private-key-file='filename' + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'service-account-private-key-file' +``` + +**Expected Result**: + +```console +'--service-account-private-key-file' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt 
--cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
+```
+
+### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the master node and set the --root-ca-file parameter to the certificate bundle file.
+--root-ca-file='path/to/file'
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'root-ca-file'
+```
+
+**Expected Result**:
+
+```console
+'--root-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key
--cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +``` + +### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the master node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. 
+--feature-gates=RotateKubeletServerCertificate=true + +### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the master node and ensure the correct value for the --bind-address parameter + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'bind-address' +``` + +**Expected Result**: + +```console +'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true 
--kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
+```
+
+## 1.4 Scheduler
+### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
+on the master node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
+```
+
+### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
+on the master node and ensure the correct value for the --bind-address parameter.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1 | grep 'bind-address'
+```
+
+**Expected Result**:
+
+```console
+'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
+```
+
+**Returned Value**:
+
+```console
+Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running
kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
+```
+
+## 2 Etcd Node Configuration Files
+### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the etcd service documentation and configure TLS encryption.
+Then, edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config
+on the master node and set the below parameters.
+--cert-file='/path/to/cert-file'
+--key-file='/path/to/key-file'
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+    echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+    case $1 in
+    "1.1.11")
+        echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+    "1.2.29")
+        echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+    "2.1")
+        echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+    "2.2")
+        echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+    "2.3")
+        echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+    "2.4")
+        echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+    "2.5")
+        echo $(grep -A 5 'peer-transport-security'
/var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "--client-cert-auth=true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "--client-cert-auth=true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit Execution:** + +```bash +./check_for_k3s_etcd.sh 2.1 +``` + +**Expected Result**: + +```console +'cert-file' is present AND 'key-file' is present +``` + +**Returned Value**: + +```console +cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key +``` + +### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master +node and set the below parameter. 
+--client-cert-auth="true" + +**Audit Script:** `check_for_k3s_etcd.sh` + +```bash +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "--client-cert-auth=true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "--client-cert-auth=true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit Execution:** + +```bash +./check_for_k3s_etcd.sh 2.2 +``` + 
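The script's `2.2` branch reduces to a single scoped `grep` over the etcd config. Below is a minimal standalone sketch of just that parsing step; the config stanza is an embedded sample matching the default K3s layout (not read from a live node), so it can be exercised anywhere:

```shell
# Hypothetical sample of the client-transport-security stanza from
# /var/lib/rancher/k3s/server/db/etcd/config, embedded so the check
# logic runs without a live server.
config='client-transport-security:
  cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt
  key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
  client-cert-auth: true'

# Same filter the audit script applies for check 2.2: limit the search to
# the client-transport-security block, then extract client-cert-auth.
value=$(printf '%s\n' "$config" | grep -A 5 'client-transport-security' | grep 'client-cert-auth')
echo "$value"
```

Against a real server, point the `grep` at `/var/lib/rancher/k3s/server/db/etcd/config` exactly as the audit script does; the check passes when the extracted line reads `client-cert-auth: true`.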
+**Expected Result**: + +```console +'--client-cert-auth' is present OR 'client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +client-cert-auth: true +``` + +### 2.3 Ensure that the --auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master +node and either remove the --auto-tls parameter or set it to false. + --auto-tls=false + +**Audit Script:** `check_for_k3s_etcd.sh` + +```bash +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + 
echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "--client-cert-auth=true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "--client-cert-auth=true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit Execution:** + +```bash +./check_for_k3s_etcd.sh 2.3 +``` + +**Expected Result**: + +```console +'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present +``` + +**Returned Value**: + +```console +error: process ID list syntax error Usage: ps [options] Try 'ps --help 'simple|list|output|threads|misc|all'' or 'ps --help 's|l|o|t|m|a'' for additional help text. For more details see ps(1). cat: /proc//environ: No such file or directory +``` + +### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the etcd service documentation and configure peer TLS encryption as appropriate +for your etcd cluster. +Then, edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the +master node and set the below parameters. 
+--peer-cert-file='/path/to/peer-cert-file'
+--peer-key-file='/path/to/peer-key-file'
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+    echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+    case $1 in
+    "1.1.11")
+        echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+    "1.2.29")
+        echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+    "2.1")
+        echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+    "2.2")
+        echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+    "2.3")
+        echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+    "2.4")
+        echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+    "2.5")
+        echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+    "2.6")
+        echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+    "2.7")
+        echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+    esac
+else
+# If another database is running, return whatever is required to pass the scan
+    case $1 in
+    "1.1.11")
+        echo "700";;
+    "1.2.29")
+        echo "--etcd-certfile AND --etcd-keyfile";;
+    "2.1")
+        echo "cert-file AND key-file";;
+    "2.2")
+        echo "--client-cert-auth=true";;
+    "2.3")
+        echo "false";;
+    "2.4")
+        echo "peer-cert-file AND peer-key-file";;
+    "2.5")
+        echo "--client-cert-auth=true";;
+    "2.6")
+        echo "--peer-auto-tls=false";;
+    "2.7")
+        echo "--trusted-ca-file";;
+    esac
+fi
+
+```
+
+**Audit
Execution:** + +```bash +./check_for_k3s_etcd.sh 2.4 +``` + +**Expected Result**: + +```console +'cert-file' is present AND 'key-file' is present +``` + +**Returned Value**: + +```console +cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key +``` + +### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master +node and set the below parameter. +--peer-client-cert-auth=true + +**Audit Script:** `check_for_k3s_etcd.sh` + +```bash +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' 
/var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "--client-cert-auth=true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "--client-cert-auth=true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit Execution:** + +```bash +./check_for_k3s_etcd.sh 2.5 +``` + +**Expected Result**: + +```console +'--client-cert-auth' is present OR 'client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +client-cert-auth: true +``` + +### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master +node and either remove the --peer-auto-tls parameter or set it to false. 
+--peer-auto-tls=false + +**Audit Script:** `check_for_k3s_etcd.sh` + +```bash +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "--client-cert-auth=true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "--client-cert-auth=true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit Execution:** + +```bash +./check_for_k3s_etcd.sh 2.6 +``` + 
+**Expected Result**: + +```console +'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present +``` + +**Returned Value**: + +```console +error: process ID list syntax error Usage: ps [options] Try 'ps --help 'simple|list|output|threads|misc|all'' or 'ps --help 's|l|o|t|m|a'' for additional help text. For more details see ps(1). cat: /proc//environ: No such file or directory +``` + +### 2.7 Ensure that a unique Certificate Authority is used for etcd (Manual) + + +**Result:** pass + +**Remediation:** +[Manual test] +Follow the etcd documentation and create a dedicated certificate authority setup for the +etcd service. +Then, edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the +master node and set the below parameter. +--trusted-ca-file='/path/to/ca-file' + +**Audit Script:** `check_for_k3s_etcd.sh` + +```bash +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo $(grep -A 5 'peer-transport-security' 
/var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "--client-cert-auth=true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "--client-cert-auth=true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit Execution:** + +```bash +./check_for_k3s_etcd.sh 2.7 +``` + +**Expected Result**: + +```console +'trusted-ca-file' is present +``` + +**Returned Value**: + +```console +trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt +``` + +## 3.1 Authentication and Authorization +### 3.1.1 Client certificate authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be +implemented in place of client certificates. + +## 3.2 Logging +### 3.2.1 Ensure that a minimal audit policy is created (Manual) + + +**Result:** warn + +**Remediation:** +Create an audit policy file for your cluster. + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-policy-file' +``` + +### 3.2.2 Ensure that the audit policy covers key security concerns (Manual) + + +**Result:** warn + +**Remediation:** +Consider modification of the audit policy in use on the cluster to include these items, at a +minimum. 
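+For checks 3.2.1 and 3.2.2, a starting point can be sketched as a small audit policy file. The rule choices below are illustrative assumptions, not the benchmark's required policy, and the script writes to a temporary path so it can be inspected safely; on a real server the file would live at whatever path is passed to the apiserver via `--audit-policy-file` (e.g. `/var/lib/rancher/k3s/server/audit.yaml` in the flags captured later in this report).

```shell
#!/bin/bash
# Illustrative sketch only: write a minimal Kubernetes audit policy to a
# temporary location for inspection. The specific rules are example
# assumptions, not the CIS-mandated policy.
POLICY_FILE="$(mktemp -d)/audit.yaml"

cat > "$POLICY_FILE" <<'EOF'
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record secret/configmap access at Metadata level so payloads are not logged.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  # Record everything else at the Request level.
  - level: Request
EOF

echo "Wrote example policy to $POLICY_FILE"
cat "$POLICY_FILE"
```

The K3s server would then need to be restarted with `--kube-apiserver-arg` values pointing `--audit-policy-file` (and an `--audit-log-path`) at the chosen location before the 3.2.1 audit command reports the flag as present.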
+ +## 4.1 Worker Node Configuration Files +### 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, +chmod 644 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf + +### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, +chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf + +### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Manual) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, +chmod 644 /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig + +**Audit:** + +```bash +stat -c %a /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig +``` + +**Expected Result**: + +```console +'permissions' is present OR '/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig' is not present +``` + +**Returned Value**: + +```console +644 +``` + +### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Manual) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. 
+For example, chown root:root /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig + +**Audit:** + +```bash +stat -c %U:%G /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig +``` + +**Expected Result**: + +```console +'root:root' is present OR '/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig' is not present +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, +chmod 644 /var/lib/rancher/k3s/server/cred/admin.kubeconfig + +**Audit:** + +```bash +stat -c %a /var/lib/rancher/k3s/agent/kubelet.kubeconfig +``` + +**Expected Result**: + +```console +'644' is equal to '644' +``` + +**Returned Value**: + +```console +644 +``` + +### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. 
+For example, +chown root:root /var/lib/rancher/k3s/server/cred/admin.kubeconfig + +**Audit:** + +```bash +stat -c %U:%G /var/lib/rancher/k3s/agent/kubelet.kubeconfig +``` + +**Expected Result**: + +```console +'root:root' is equal to 'root:root' +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual) + + +**Result:** pass + +**Remediation:** +Run the following command to modify the file permissions of the +--client-ca-file chmod 644 'filename' + +**Audit:** + +```bash +stat -c %a /var/lib/rancher/k3s/server/tls/server-ca.crt +``` + +**Expected Result**: + +```console +'644' is equal to '644' OR '640' is present OR '600' is present OR '444' is present OR '440' is present OR '400' is present OR '000' is present +``` + +**Returned Value**: + +```console +644 +``` + +### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual) + + +**Result:** pass + +**Remediation:** +Run the following command to modify the ownership of the --client-ca-file. 
+chown root:root 'filename' + +**Audit:** + +```bash +stat -c %U:%G /var/lib/rancher/k3s/server/tls/client-ca.crt +``` + +**Expected Result**: + +```console +'root:root' is equal to 'root:root' +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the following command (using the config file location identified in the Audit step) +chmod 644 /var/lib/kubelet/config.yaml + +### 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the following command (using the config file location identified in the Audit step) +chown root:root /var/lib/kubelet/config.yaml + +## 4.2 Kubelet +### 4.2.1 Ensure that the anonymous-auth argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to +false. +If using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--anonymous-auth=false +Based on your system, restart the kubelet service. 
For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'anonymous-auth' | grep -v grep +``` + +**Expected Result**: + +```console +'--anonymous-auth' is equal to 'false' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key 
--request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set authorization: mode to Webhook. If +using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--authorization-mode=Webhook +Based on your system, restart the kubelet service. 
For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode' | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' does not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt 
--proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set authentication: x509: clientCAFile to +the location of the client CA file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--client-ca-file='path/to/client-ca-file' +Based on your system, restart the kubelet service. 
For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver'| tail -n1 | grep 'client-ca-file' | grep -v grep +``` + +**Expected Result**: + +```console +'--client-ca-file' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key 
--request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 4.2.4 Ensure that the --read-only-port argument is set to 0 (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set readOnlyPort to 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--read-only-port=0 +Based on your system, restart the kubelet service. 
For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'read-only-port' +``` + +**Expected Result**: + +```console +'--read-only-port' is equal to '0' OR '--read-only-port' is not present +``` + +**Returned Value**: + +```console +Oct 04 22:42:19 node-01 k3s[15833]: time="2022-10-04T22:42:19Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available'5%,nodefs.available'5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=node-01 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=c94666fd-3d6a-40a4-81d7-7a14f6ec1117 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key" +``` + +### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual) + + +**Result:** warn + +**Remediation:** +If using a Kubelet config file, edit the file to set streamingConnectionIdleTimeout to a +value other than 0. 
+If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--streaming-connection-idle-timeout=5m +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'streaming-connection-idle-timeout' +``` + +### 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set protectKernelDefaults: true. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--protect-kernel-defaults=true +Based on your system, restart the kubelet service. 
For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'protect-kernel-defaults' +``` + +**Expected Result**: + +```console +'--protect-kernel-defaults' is equal to 'true' +``` + +**Returned Value**: + +```console +Oct 04 22:42:19 node-01 k3s[15833]: time="2022-10-04T22:42:19Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available'5%,nodefs.available'5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=node-01 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=c94666fd-3d6a-40a4-81d7-7a14f6ec1117 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key" +``` + +### 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set makeIPTablesUtilChains: true. 
+If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +remove the --make-iptables-util-chains argument from the +KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'make-iptables-util-chains' +``` + +**Expected Result**: + +```console +'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present +``` + +**Returned Value**: + +```console +Oct 04 22:42:19 node-01 k3s[15833]: time="2022-10-04T22:42:19Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available'5%,nodefs.available'5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=node-01 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=c94666fd-3d6a-40a4-81d7-7a14f6ec1117 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key" +``` + 
+### 4.2.8 Ensure that the --hostname-override argument is not set (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +on each worker node and remove the --hostname-override argument from the +KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +### 4.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual) + + +**Result:** warn + +**Remediation:** +If using a Kubelet config file, edit the file to set eventRecordQPS: to an appropriate level. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC containerd +``` + +### 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set tlsCertFile to the location +of the certificate file to use to identify this Kubelet, and tlsPrivateKeyFile +to the location of the corresponding private key file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameters in KUBELET_CERTIFICATE_ARGS variable. +--tls-cert-file='path/to/tls-certificate-file' +--tls-private-key-file='path/to/tls-key-file' +Based on your system, restart the kubelet service. 
For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1
+```
+
+**Expected Result**:
+
+```console
+'--tls-cert-file' is present AND '--tls-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+Oct 04 22:42:19 node-01 k3s[15833]: time="2022-10-04T22:42:19Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=node-01 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=c94666fd-3d6a-40a4-81d7-7a14f6ec1117 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
+```
+
+### 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If using a Kubelet config file, edit the file to add the line rotateCertificates: true or
+remove it altogether to use the default value.
+If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS +variable. +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC containerd +``` + +**Audit Config:** + +```bash +/bin/cat /var/lib/kubelet/config.yaml +``` + +### 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable. +--feature-gates=RotateKubeletServerCertificate=true +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +### 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual) + + +**Result:** warn + +**Remediation:** +If using a Kubelet config file, edit the file to set TLSCipherSuites: to +TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +or to a subset of these values. +If using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the --tls-cipher-suites parameter as follows, or to a subset of these values. 
+--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC containerd
+```
+
+## 5.1 RBAC and Service Accounts
+### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
+if they need this role or if they could use a role with fewer privileges.
+Where possible, first bind users to a lower privileged role and then remove the
+clusterrolebinding to the cluster-admin role:
+kubectl delete clusterrolebinding [name]
+
+### 5.1.2 Minimize access to secrets (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove get, list and watch access to secret objects in the cluster.
+
+### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, replace any use of wildcards in clusterroles and roles with specific
+objects or actions.
+
+### 5.1.4 Minimize access to create pods (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove create access to pod objects in the cluster.
+
+### 5.1.5 Ensure that default service accounts are not actively used. (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Create explicit service accounts wherever a Kubernetes workload requires specific access
+to the Kubernetes API server.
+Modify the configuration of each default service account to include this value +automountServiceAccountToken: false + +### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) + + +**Result:** warn + +**Remediation:** +Modify the definition of pods and service accounts which do not need to mount service +account tokens to disable it. + +### 5.1.7 Avoid use of system:masters group (Manual) + + +**Result:** warn + +**Remediation:** +Remove the system:masters group from all users in the cluster. + +### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove the impersonate, bind and escalate rights from subjects. + +## 5.2 Pod Security Policies +### 5.2.1 Minimize the admission of privileged containers (Automated) + + +**Result:** warn + +**Remediation:** +Create a PSP as described in the Kubernetes documentation, ensuring that +the .spec.privileged field is omitted or set to false. + +### 5.2.2 Minimize the admission of containers wishing to share the host process ID namespace (Automated) + + +**Result:** warn + +**Remediation:** +Create a PSP as described in the Kubernetes documentation, ensuring that the +.spec.hostPID field is omitted or set to false. + +### 5.2.3 Minimize the admission of containers wishing to share the host IPC namespace (Automated) + + +**Result:** warn + +**Remediation:** +Create a PSP as described in the Kubernetes documentation, ensuring that the +.spec.hostIPC field is omitted or set to false. + +### 5.2.4 Minimize the admission of containers wishing to share the host network namespace (Automated) + + +**Result:** warn + +**Remediation:** +Create a PSP as described in the Kubernetes documentation, ensuring that the +.spec.hostNetwork field is omitted or set to false. 
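+
The host-namespace controls above (5.2.2 through 5.2.4) can be spot-checked together from the command line. The sketch below is illustrative only: the `psp_report` function substitutes hypothetical sample output for the live `kubectl get psp -o custom-columns=...` command, and the policy names shown are invented; swap in the real command when auditing a cluster.

```shell
#!/bin/sh
# Hypothetical stand-in for:
#   kubectl get psp -o custom-columns=NAME:.metadata.name,PID:.spec.hostPID,IPC:.spec.hostIPC,NET:.spec.hostNetwork
# Replace the heredoc with the live command when auditing a real cluster.
psp_report() {
cat <<'EOF'
NAME         PID      IPC      NET
restricted   <none>   <none>   <none>
privileged   true     true     true
EOF
}

# Flag any PSP that allows sharing a host namespace (controls 5.2.2 - 5.2.4).
psp_report | awk 'NR > 1 && ($2 == "true" || $3 == "true" || $4 == "true") {
  print $1 " allows host namespace sharing"
}'
```

With the sample data, only the hypothetical `privileged` policy is flagged; a hardened cluster should report nothing.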
+
+### 5.2.5 Minimize the admission of containers with allowPrivilegeEscalation (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Create a PSP as described in the Kubernetes documentation, ensuring that the
+.spec.allowPrivilegeEscalation field is omitted or set to false.
+
+### 5.2.6 Minimize the admission of root containers (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Create a PSP as described in the Kubernetes documentation, ensuring that the
+.spec.runAsUser.rule is set to either MustRunAsNonRoot or MustRunAs with the range of
+UIDs not including 0.
+
+### 5.2.7 Minimize the admission of containers with the NET_RAW capability (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Create a PSP as described in the Kubernetes documentation, ensuring that the
+.spec.requiredDropCapabilities is set to include either NET_RAW or ALL.
+
+### 5.2.8 Minimize the admission of containers with added capabilities (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that allowedCapabilities is not present in PSPs for the cluster unless
+it is set to an empty array.
+
+### 5.2.9 Minimize the admission of containers with capabilities assigned (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the use of capabilities in applications running on your cluster. Where a namespace
+contains applications which do not require any Linux capabilities to operate, consider adding
+a PSP which forbids the admission of containers which do not drop all capabilities.
+
+## 5.3 Network Policies and CNI
+### 5.3.1 Ensure that the CNI in use supports Network Policies (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If the CNI plugin in use does not support network policies, consideration should be given to
+making use of a different plugin, or finding an alternate mechanism for restricting traffic
+in the Kubernetes cluster.
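+
Once a policy-capable CNI is in place, one way to verify coverage is to look for namespaces that contain no NetworkPolicy objects at all. The sketch below is a hedged example: the namespace list and `policy_count` function are hypothetical stand-ins for the live `kubectl` commands shown in the comments.

```shell
#!/bin/sh
# Sketch: report namespaces that contain no NetworkPolicy objects.
# The data below is a hypothetical stand-in for live kubectl output.

# Stand-in for: kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'
namespaces="default kube-system app-prod"

# Stand-in for: kubectl get networkpolicy -n "$1" --no-headers 2>/dev/null | wc -l
policy_count() {
  case "$1" in
    app-prod) echo 2 ;;  # hypothetical: app-prod defines two policies
    *)        echo 0 ;;
  esac
}

for ns in $namespaces; do
  if [ "$(policy_count "$ns")" -eq 0 ]; then
    echo "namespace $ns has no NetworkPolicy"
  fi
done
```

With the sample data this reports `default` and `kube-system`; on a real cluster, replace the stand-ins with the commented `kubectl` calls and review each reported namespace.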
+
+### 5.3.2 Ensure that all Namespaces have Network Policies defined (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the documentation and create NetworkPolicy objects as you need them.
+
+## 5.4 Secrets Management
+### 5.4.1 Prefer using secrets as files over secrets as environment variables (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If possible, rewrite application code to read secrets from mounted secret files, rather than
+from environment variables.
+
+### 5.4.2 Consider external secret storage (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Refer to the secrets management options offered by your cloud provider or a third-party
+secrets management solution.
+
+## 5.5 Extensible Admission Control
+### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and set up image provenance.
+
+## 5.7 General Policies
+### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the documentation and create namespaces for objects in your deployment as you need
+them.
+
+### 5.7.2 Ensure that the seccomp profile is set to docker/default in your pod definitions (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Use security context to enable the docker/default seccomp profile in your pod definitions.
+An example is shown below:
+  securityContext:
+    seccompProfile:
+      type: RuntimeDefault
+
+### 5.7.3 Apply Security Context to Your Pods and Containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and apply security contexts to your pods. For a
+suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker
+Containers.
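+
A quick way to find pods that have not opted into the RuntimeDefault seccomp profile (control 5.7.2) is to inspect each pod's securityContext. The sketch below is a hedged example: `pod_report` substitutes hypothetical sample data for the live `kubectl get pods` command, and the pod names are invented.

```shell
#!/bin/sh
# Hypothetical stand-in for:
#   kubectl get pods -A -o custom-columns=POD:.metadata.name,SECCOMP:.spec.securityContext.seccompProfile.type
pod_report() {
cat <<'EOF'
POD            SECCOMP
web-frontend   RuntimeDefault
legacy-job     <none>
EOF
}

# Report pods whose security context does not request the RuntimeDefault profile.
pod_report | awk 'NR > 1 && $2 != "RuntimeDefault" {
  print $1 " does not set the RuntimeDefault seccomp profile"
}'
```

With the sample data, only the hypothetical `legacy-job` pod is reported. Note that a seccompProfile may also be set per container rather than at the pod level, so a clean report here is not by itself conclusive.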
+ +### 5.7.4 The default namespace should not be used (Manual) + + +**Result:** warn + +**Remediation:** +Ensure that namespaces are created to allow for appropriate segregation of Kubernetes +resources and that all new resources are created in a specific namespace. + diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-v2.7-hardening-guides/k3s-self-assessment-guide-with-cis-v1.23-benchmark.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-v2.7-hardening-guides/k3s-self-assessment-guide-with-cis-v1.23-benchmark.md new file mode 100644 index 00000000000..c34157f2054 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-v2.7-hardening-guides/k3s-self-assessment-guide-with-cis-v1.23-benchmark.md @@ -0,0 +1,3143 @@ +--- +title: K3s CIS v1.23 Benchmark - Self-Assessment Guide - Rancher v2.7 +--- + +### K3s CIS Kubernetes Benchmark v1.23 - K3s with Kubernetes v1.22 to v1.24 + +#### Overview + +This document is a companion to the [Rancher v2.7 K3s security hardening guide](k3s-hardening-guide-with-cis-benchmark.md). The hardening guide provides prescriptive guidance for hardening a production installation of Rancher with K3s provisioned clusters, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the CIS Kubernetes Benchmark. + +This guide corresponds to specific versions of the hardening guide, Rancher, CIS Benchmark and Kubernetes: + +| Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version | +| ----------------------- | --------------- | --------------------- | ------------------- | +| Hardening Guide | Rancher v2.7 | CIS v1.23 | Kubernetes v1.22 to v1.24 | + +This document is to be used by Rancher operators, security teams, auditors and decision makers. 
+
+For more information about each control, including detailed descriptions and remediations for failing tests, you can refer to the corresponding section of the CIS Kubernetes Benchmark v1.23. You can download the benchmark, after creating a free account, from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/) website.
+
+#### Testing controls methodology
+
+Each control in the CIS Kubernetes Benchmark was evaluated against a K3s cluster that was configured according to the accompanying hardening guide.
+
+Where control audits differ from the original CIS benchmark, the audit commands specific to K3s are provided for testing.
+
+These are the possible results for each control:
+
+- **Pass** - The K3s cluster under test passed the audit outlined in the benchmark.
+- **Not Applicable** - The control is not applicable to K3s because of how it is designed to operate. The remediation section explains why this is so.
+- **Warn** - The control is manual in the CIS benchmark and it depends on the cluster's use case or some other factor that must be determined by the cluster operator. These controls have been evaluated to ensure K3s does not prevent their implementation, but no further configuration or auditing of the cluster under test has been performed.
+
+This guide assumes that K3s is running as a systemd unit. If your installation differs, you will need to adjust the audit commands to fit your scenario.
+
+:::note
+
+Only `automated` tests (previously called `scored`) are covered in this guide.
+
+:::
+
+### Controls
+
+---
+
+## 1.1 Control Plane Node Configuration Files
+### 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the
+control plane node.
+For example, chmod 644 /etc/kubernetes/manifests/kube-apiserver.yaml + +### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml + +### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chmod 644 /etc/kubernetes/manifests/kube-controller-manager.yaml + +### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml + +### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chmod 644 /etc/kubernetes/manifests/kube-scheduler.yaml + +### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. 
+For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml + +### 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chmod 644 /etc/kubernetes/manifests/etcd.yaml + +### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root /etc/kubernetes/manifests/etcd.yaml + +### 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chmod 644 'path/to/cni/files' + +### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root 'path/to/cni/files' + +### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated) + + +**Result:** pass + +**Remediation:** +On the etcd server node, get the etcd data directory, passed as an argument --data-dir, +from the command 'ps -ef | grep etcd'. +Run the below command (based on the etcd data directory found above). 
For example, +chmod 700 /var/lib/etcd + +**Audit Script:** `check_for_k3s_etcd.sh` + +```bash +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "--client-cert-auth=true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "--client-cert-auth=true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit Execution:** + +```bash +./check_for_k3s_etcd.sh 1.1.11 
+``` + +**Expected Result**: + +```console +'700' is equal to '700' +``` + +**Returned Value**: + +```console +700 +``` + +### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) + + +**Result:** Not Applicable + +**Remediation:** +On the etcd server node, get the etcd data directory, passed as an argument --data-dir, +from the command 'ps -ef | grep etcd'. +Run the below command (based on the etcd data directory found above). +For example, chown etcd:etcd /var/lib/etcd + +### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chmod 600 /var/lib/rancher/k3s/server/cred/admin.kubeconfig + +### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chown root:root /etc/kubernetes/admin.conf + +**Audit:** + +```bash +/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/admin.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/admin.kubeconfig; fi' +``` + +**Expected Result**: + +```console +'root:root' is equal to 'root:root' +``` + +**Returned Value**: + +```console +root:root +``` + +### 1.1.15 Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. 
+For example, +chmod 644 scheduler + +**Audit:** + +```bash +/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi' +``` + +**Expected Result**: + +```console +permissions has permissions 644, expected 644 or more restrictive +``` + +**Returned Value**: + +```console +permissions=644 +``` + +### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root scheduler + +**Audit:** + +```bash +/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi' +``` + +**Expected Result**: + +```console +'root:root' is present +``` + +**Returned Value**: + +```console +root:root +``` + +### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chmod 644 controllermanager + +**Audit:** + +```bash +/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/controller.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/controller.kubeconfig; fi' +``` + +**Expected Result**: + +```console +permissions has permissions 644, expected 644 or more restrictive +``` + +**Returned Value**: + +```console +permissions=644 +``` + +### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. 
+For example, +chown root:root controllermanager + +**Audit:** + +```bash +stat -c %U:%G /var/lib/rancher/k3s/server/tls +``` + +**Expected Result**: + +```console +'root:root' is equal to 'root:root' +``` + +**Returned Value**: + +```console +root:root +``` + +### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown -R root:root /etc/kubernetes/pki/ + +**Audit:** + +```bash +find /var/lib/rancher/k3s/server/tls | xargs stat -c %U:%G +``` + +**Expected Result**: + +```console +'root:root' is present +``` + +**Returned Value**: + +```console +root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root +``` + +### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Manual) + + +**Result:** warn + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chmod -R 644 /etc/kubernetes/pki/*.crt + +**Audit:** + +```bash +stat -c %n %a /var/lib/rancher/k3s/server/tls/*.crt +``` + +### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual) + + +**Result:** warn + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. 
+For example, +chmod -R 600 /etc/kubernetes/pki/*.key + +**Audit:** + +```bash +stat -c %n %a /var/lib/rancher/k3s/server/tls/*.key +``` + +## 1.2 API Server +### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual) + + +**Result:** warn + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the below parameter. +--anonymous-auth=false + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'anonymous-auth' +``` + +### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the documentation and configure alternate mechanisms for authentication. Then, +edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and remove the --token-auth-file='filename' parameter. + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep +``` + +**Expected Result**: + +```console +'--token-auth-file' is not present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount 
--enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and remove the `DenyServiceExternalIPs` +from 
enabled admission plugins. + +**Audit:** + +```bash +/bin/ps -ef | grep containerd | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' is present OR '--enable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 3412 1 0 22:31 ? 00:00:00 /var/lib/rancher/k3s/data/ec00304416df58a8da2a883b1b87ab882b199ef11c4e01b28f07d643c8067d91/bin/containerd-shim-runc-v2 -namespace k8s.io -id a91058354400a178f1a525cdd4fc12580dcc060fbed0f97ffde5ff60a31c027b -address /run/k3s/containerd/containerd.sock root 3539 1 0 22:31 ? 00:00:00 /var/lib/rancher/k3s/data/ec00304416df58a8da2a883b1b87ab882b199ef11c4e01b28f07d643c8067d91/bin/containerd-shim-runc-v2 -namespace k8s.io -id 85b4d9d4c7f307233fe6c745a9849dc33e568994c15c9885f4a2ddedd0d5517d -address /run/k3s/containerd/containerd.sock root 3543 1 0 22:31 ? 00:00:00 /var/lib/rancher/k3s/data/ec00304416df58a8da2a883b1b87ab882b199ef11c4e01b28f07d643c8067d91/bin/containerd-shim-runc-v2 -namespace k8s.io -id eab240240838ca74d457b25a328a3d2bfeb49eb537ccd21154a10dd0802d41c1 -address /run/k3s/containerd/containerd.sock root 4704 1 0 22:32 ? 00:00:00 /var/lib/rancher/k3s/data/ec00304416df58a8da2a883b1b87ab882b199ef11c4e01b28f07d643c8067d91/bin/containerd-shim-runc-v2 -namespace k8s.io -id 64b394b7e18b3cf48ac88af1c7628bc27f56e6cf0e894c92f8297c28578755d9 -address /run/k3s/containerd/containerd.sock root 4788 1 0 22:32 ? 00:00:00 /var/lib/rancher/k3s/data/ec00304416df58a8da2a883b1b87ab882b199ef11c4e01b28f07d643c8067d91/bin/containerd-shim-runc-v2 -namespace k8s.io -id c29a1995ae69aa8a4c9c97c77bbb5a962e705227125e3366a74abd1b2dbfcbba -address /run/k3s/containerd/containerd.sock root 6346 1 0 22:33 ? 00:00:00 /var/lib/rancher/k3s/data/ec00304416df58a8da2a883b1b87ab882b199ef11c4e01b28f07d643c8067d91/bin/containerd-shim-runc-v2 -namespace k8s.io -id 060a7677cbcc9b7867b4b45ff0def211c13181e5a02a721faa4bc76f263d3a80 -address /run/k3s/containerd/containerd.sock root 6429 1 0 22:33 ? 
00:00:00 /var/lib/rancher/k3s/data/ec00304416df58a8da2a883b1b87ab882b199ef11c4e01b28f07d643c8067d91/bin/containerd-shim-runc-v2 -namespace k8s.io -id 48c40157ede9c920e0ffcd29faa5a7bb1462c59d80895970556bcee0ebc3bbcd -address /run/k3s/containerd/containerd.sock root 6784 1 0 22:33 ? 00:00:00 /var/lib/rancher/k3s/data/ec00304416df58a8da2a883b1b87ab882b199ef11c4e01b28f07d643c8067d91/bin/containerd-shim-runc-v2 -namespace k8s.io -id 52aae84e7daa038bccd8af4214af940ef56ed97cecd5f8c8523a33219e89d596 -address /run/k3s/containerd/containerd.sock root 8836 1 0 22:38 ? 00:00:00 /var/lib/rancher/k3s/data/ec00304416df58a8da2a883b1b87ab882b199ef11c4e01b28f07d643c8067d91/bin/containerd-shim-runc-v2 -namespace k8s.io -id 24a3c62f97d6f5f704eeaa7b1c1d1c4de7017fbd7d76fed3d6fb2b4537f63399 -address /run/k3s/containerd/containerd.sock root 10002 1 0 22:39 ? 00:00:00 /var/lib/rancher/k3s/data/ec00304416df58a8da2a883b1b87ab882b199ef11c4e01b28f07d643c8067d91/bin/containerd-shim-runc-v2 -namespace k8s.io -id ec48cd072c05126378e96a2a31b826409482752f5d5ddac0706c750e3fdfad4f -address /run/k3s/containerd/containerd.sock root 15934 15833 6 22:42 ? 00:00:07 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd root 18034 1 0 22:43 ? 00:00:00 /var/lib/rancher/k3s/data/ec00304416df58a8da2a883b1b87ab882b199ef11c4e01b28f07d643c8067d91/bin/containerd-shim-runc-v2 -namespace k8s.io -id 879e7096e24701e14d837d9cc4aa699c24668082f573ae05371bf956707cc556 -address /run/k3s/containerd/containerd.sock root 19169 1 0 22:44 ? 00:00:00 /var/lib/rancher/k3s/data/ec00304416df58a8da2a883b1b87ab882b199ef11c4e01b28f07d643c8067d91/bin/containerd-shim-runc-v2 -namespace k8s.io -id 83979b8641b708495b9962188d35ca35db1ca83f0f4f4ff5bc317f83554fd414 -address /run/k3s/containerd/containerd.sock root 19287 1 0 22:44 ? 
00:00:00 /var/lib/rancher/k3s/data/ec00304416df58a8da2a883b1b87ab882b199ef11c4e01b28f07d643c8067d91/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5d8b214c37c95475940d79aaa19e711fd67cf7ac0ba63a55d9796dd8be7b5de4 -address /run/k3s/containerd/containerd.sock +``` + +### 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and remove the --kubelet-https parameter. + +### 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the +apiserver and kubelets. Then, edit API server pod specification file +/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the +kubelet client certificate and key parameters as below. 
+--kubelet-client-certificate='path/to/client-certificate-file' +--kubelet-client-key='path/to/client-key-file' + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority' +``` + +**Expected Result**: + +```console +'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt 
--proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between +the apiserver and kubelets. Then, edit the API server pod specification file +/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the +--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
+--kubelet-certificate-authority='ca-string' + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority' +``` + +**Expected Result**: + +```console +'--kubelet-certificate-authority' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow. +One such example could be as below. 
+--authorization-mode=RBAC + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode' +``` + +**Expected Result**: + +```console +'--authorization-mode' does not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to a value that includes Node. 
+--authorization-mode=Node,RBAC + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode' +``` + +**Expected Result**: + +```console +'--authorization-mode' has 'Node' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to a value that includes RBAC, +for example `--authorization-mode=Node,RBAC`. 
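Note that K3s has no static `/etc/kubernetes/manifests/kube-apiserver.yaml` manifest; extra api-server flags are typically supplied through the K3s configuration file instead. The following is a minimal sketch only, assuming a default systemd install whose config file lives at `/etc/rancher/k3s/config.yaml` (the snippet stages the entry in a temporary file so nothing live is touched):

```shell
# Sketch: stage a kube-apiserver-arg entry in a scratch file rather than
# the live /etc/rancher/k3s/config.yaml (path assumed for a default install).
cfg=$(mktemp)
cat <<'EOF' >> "$cfg"
kube-apiserver-arg:
  - "authorization-mode=Node,RBAC"
EOF
grep 'authorization-mode=Node,RBAC' "$cfg"
# On a real server you would append the same entry to /etc/rancher/k3s/config.yaml
# and restart the k3s service for the flag to take effect.
```

The audit above then confirms the flag in the running process rather than in the config file, so it verifies what the server actually started with.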
+ +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode' +``` + +**Expected Result**: + +```console +'--authorization-mode' has 'RBAC' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy 
--requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and set the desired limits in a configuration file. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +and set the below parameters. +--enable-admission-plugins=...,EventRateLimit,... 
+--admission-control-config-file='path/to/configuration/file' + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins' +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'EventRateLimit' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key 
--request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a +value that does not include AlwaysAdmit. 
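The pass/fail rule in the expected result can be sketched in plain shell. The plugin list below is copied from the returned log line above rather than read live from journalctl, so the snippet is illustrative only:

```shell
# Plugin list as reported by the audit output above.
plugins='NodeRestriction,PodSecurityPolicy,ServiceAccount'
# Wrap the list in commas so each plugin name matches only as a whole token.
case ",$plugins," in
  *,AlwaysAdmit,*) result=FAIL ;;
  *)               result=PASS ;;
esac
echo "$result"
```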
+ +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins' +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key 
--request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual) + + +**Result:** warn + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --enable-admission-plugins parameter to include +AlwaysPullImages. +--enable-admission-plugins=...,AlwaysPullImages,... 
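As a sketch of what this remediation changes, the snippet below appends `AlwaysPullImages` to the plugin list reported by the audit above, but only if it is not already present; how the resulting flag is passed to the api-server depends on your install and is not shown here:

```shell
# Current list taken from the audit output above.
plugins='NodeRestriction,PodSecurityPolicy,ServiceAccount'
# Append AlwaysPullImages only when it is missing, to avoid a duplicate entry.
case ",$plugins," in
  *,AlwaysPullImages,*) : ;;
  *) plugins="$plugins,AlwaysPullImages" ;;
esac
echo "--enable-admission-plugins=$plugins"
```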
+ +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins' +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'AlwaysPullImages' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --enable-admission-plugins parameter to include +SecurityContextDeny, unless PodSecurityPolicy is already in place. +--enable-admission-plugins=...,SecurityContextDeny,... 
+ +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins' +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key 
--request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the documentation and create ServiceAccount objects as per your environment. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and ensure that the --disable-admission-plugins parameter is set to a +value that does not include ServiceAccount. 
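A sketch of this check's logic: extract any `--disable-admission-plugins` value from a captured invocation (abbreviated here from the audit output above) and confirm `ServiceAccount` is not in it. The flag is absent on this node, which also satisfies the check:

```shell
# Abbreviated api-server invocation from the audit output above.
line='kube-apiserver --authorization-mode=Node,RBAC --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount'
# Pull out the --disable-admission-plugins value; empty when the flag is absent.
disabled=$(printf '%s\n' "$line" | grep -o -e '--disable-admission-plugins=[^ ]*' | cut -d= -f2)
case ",$disabled," in
  *,ServiceAccount,*) result=FAIL ;;
  *)                  result=PASS ;;
esac
echo "$result"
```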
+ +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'ServiceAccount' +``` + +**Expected Result**: + +```console +'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --disable-admission-plugins parameter to +ensure it does not include NamespaceLifecycle. 
+ +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep +``` + +**Expected Result**: + +```console +'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.16 Ensure that the admission control plugin NodeRestriction is set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --enable-admission-plugins parameter to a +value that includes NodeRestriction. +--enable-admission-plugins=...,NodeRestriction,... 
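
The audit below effectively extracts the `--enable-admission-plugins` value from the logged kube-apiserver command line and tests it for `NodeRestriction`. A minimal sketch of that check, run here against a trimmed, hypothetical sample of the log line rather than live `journalctl` output:

```shell
# Trimmed, hypothetical sample of the k3s log line; a real audit takes this
# from: journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1
line='Running kube-apiserver --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --profiling=false'

# Pull out the value of --enable-admission-plugins, then test the
# comma-separated list for an exact NodeRestriction entry.
plugins=$(printf '%s\n' "$line" | grep -o -e '--enable-admission-plugins=[^ ]*' | cut -d= -f2)
case ",$plugins," in
  *,NodeRestriction,*) echo "PASS: NodeRestriction enabled" ;;
  *)                   echo "FAIL: NodeRestriction missing" ;;
esac
```

Matching on `,$plugins,` with surrounding commas avoids false positives from plugin names that merely contain `NodeRestriction` as a substring.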
+ +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins' +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'NodeRestriction' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy 
--requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.17 Ensure that the --secure-port argument is not set to 0 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and either remove the --secure-port parameter or +set it to a different (non-zero) desired port. 
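
The pass condition for this check is "greater than 0 OR not present" (an absent flag falls back to the kube-apiserver default port, which is non-zero). A sketch of that test, again against a hypothetical trimmed log line:

```shell
# Hypothetical trimmed log line; a real audit reads this from journalctl.
line='Running kube-apiserver --bind-address=127.0.0.1 --secure-port=6444'

port=$(printf '%s\n' "$line" | grep -o -e '--secure-port=[0-9]*' | cut -d= -f2)
# Empty $port means the flag is absent, so the non-zero default applies.
if [ -z "$port" ] || [ "$port" -gt 0 ]; then
  echo "PASS: secure-port=${port:-default}"
else
  echo "FAIL: secure-port is 0"
fi
```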
+ +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'secure-port' +``` + +**Expected Result**: + +```console +'--secure-port' is greater than 0 OR '--secure-port' is not present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.18 Ensure that the --profiling argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the below parameter. 
+--profiling=false + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'profiling' +``` + +**Expected Result**: + +```console +'--profiling' is equal to 'false' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy 
--requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.19 Ensure that the --audit-log-path argument is set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --audit-log-path parameter to a suitable path and +file where you would like audit logs to be written, for example, +--audit-log-path=/var/log/apiserver/audit.log + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-path' +``` + +**Expected Result**: + +```console +'--audit-log-path' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs 
--client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.20 Ensure that the --audit-log-maxage 
argument is set to 30 or as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --audit-log-maxage parameter to 30 +or as an appropriate number of days, for example, +--audit-log-maxage=30 + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxage' +``` + +**Expected Result**: + +```console +'--audit-log-maxage' is greater or equal to 30 +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt 
--kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.21 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate +value. 
For example, +--audit-log-maxbackup=10 + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxbackup' +``` + +**Expected Result**: + +```console +'--audit-log-maxbackup' is greater or equal to 10 +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.22 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB. 
+For example, to set it as 100 MB, --audit-log-maxsize=100 + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxsize' +``` + +**Expected Result**: + +```console +'--audit-log-maxsize' is greater or equal to 100 +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key 
--request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.23 Ensure that the --request-timeout argument is set as appropriate (Manual) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +and set the below parameter as appropriate and if needed. 
+For example, --request-timeout=300s + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'request-timeout' +``` + +**Expected Result**: + +```console +'--request-timeout' is not present OR '--request-timeout' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.24 Ensure that the --service-account-lookup argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the below parameter. +--service-account-lookup=true +Alternatively, you can delete the --service-account-lookup parameter from this file so +that the default takes effect. 
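
Several checks in this section share the same shape: pass when the flag is absent (so the secure default applies) or when it equals the required value. That logic can be sketched as a small helper — the function name and sample log line are hypothetical, for illustration only:

```shell
# check_flag LOGLINE FLAG EXPECTED
# Succeeds when --FLAG is absent from LOGLINE or is set to EXPECTED.
check_flag() {
  val=$(printf '%s\n' "$1" | grep -o -e "--$2=[^ ]*" | cut -d= -f2)
  [ -z "$val" ] || [ "$val" = "$3" ]
}

# Trimmed, hypothetical log line standing in for the journalctl output.
line='Running kube-apiserver --service-account-lookup=true --profiling=false'
check_flag "$line" service-account-lookup true && echo "service-account-lookup: pass"
check_flag "$line" request-timeout 300s && echo "request-timeout: pass (absent, default applies)"
```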
+ +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'service-account-lookup' +``` + +**Expected Result**: + +```console +'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.25 Ensure that the --service-account-key-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --service-account-key-file parameter +to the public key file for service accounts. 
For example, +--service-account-key-file='filename' + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'service-account-key-file' +``` + +**Expected Result**: + +```console +'--service-account-key-file' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.26 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the etcd certificate and key file parameters. 
+--etcd-certfile='path/to/client-certificate-file' +--etcd-keyfile='path/to/client-key-file' + +**Audit Script:** `check_for_k3s_etcd.sh` + +```bash +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "--client-cert-auth=true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "--client-cert-auth=true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit 
Execution:** + +```bash +./check_for_k3s_etcd.sh 1.2.29 +``` + +**Expected Result**: + +```console +'--etcd-certfile' is present AND '--etcd-keyfile' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy 
--requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.27 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection on the apiserver. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the TLS certificate and private key file parameters. 
+--tls-cert-file='path/to/tls-certificate-file' +--tls-private-key-file='path/to/tls-key-file' + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep -A1 'Running kube-apiserver' | tail -n2 +``` + +**Expected Result**: + +```console +'--tls-cert-file' is present AND '--tls-private-key-file' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt 
--proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259" +``` + +### 1.2.28 Ensure that the --client-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection on the apiserver. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the client certificate authority file. 
+--client-ca-file='path/to/client-ca-file' + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'client-ca-file' +``` + +**Expected Result**: + +```console +'--client-ca-file' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s 
--requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.29 Ensure that the --etcd-cafile argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the etcd certificate authority file parameter. 
+--etcd-cafile='path/to/ca-file' + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-cafile' +``` + +**Expected Result**: + +```console +'--etcd-cafile' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy 
--requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.30 Ensure that the --encryption-provider-config argument is set as appropriate (Manual) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and configure an EncryptionConfig file. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --encryption-provider-config parameter to the path of that file. 
+For example, --encryption-provider-config='/path/to/EncryptionConfig/File' + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'encryption-provider-config' +``` + +**Expected Result**: + +```console +'--encryption-provider-config' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key 
--request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.31 Ensure that encryption providers are appropriately configured (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and configure an EncryptionConfig file. +In this file, choose aescbc, kms or secretbox as the encryption provider. + +**Audit:** + +```bash +grep aescbc /path/to/encryption-config.json +``` + +### 1.2.32 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual) + + +**Result:** warn + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the below parameter. 
+--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256, +TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, +TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, +TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, +TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, +TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, +TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA, +TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384 + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'tls-cipher-suites' +``` + +## 1.3 Controller Manager +### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual) + + +**Result:** warn + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold, +for example, --terminated-pod-gc-threshold=10 + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'terminated-pod-gc-threshold' +``` + +### 1.3.2 Ensure that the --profiling argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the below parameter. 
+--profiling=false + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'profiling' +``` + +**Expected Result**: + +```console +'--profiling' is equal to 'false' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +``` + +### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated) + + 
+**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node to set the below parameter. +--use-service-account-credentials=true + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'use-service-account-credentials' +``` + +**Expected Result**: + +```console +'--use-service-account-credentials' is not equal to 'false' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 
--service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +``` + +### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --service-account-private-key-file parameter +to the private key file for service accounts. +--service-account-private-key-file='filename' + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'service-account-private-key-file' +``` + +**Expected Result**: + +```console +'--service-account-private-key-file' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt 
--cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +``` + +### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --root-ca-file parameter to the certificate bundle file. +--root-ca-file='path/to/file' + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'root-ca-file' +``` + +**Expected Result**: + +```console +'--root-ca-file' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key 
--cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +``` + +### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. 
+--feature-gates=RotateKubeletServerCertificate=true + +### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and ensure the correct value for the --bind-address parameter + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'bind-address' +``` + +**Expected Result**: + +```console +'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true 
--kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true" +``` + +## 1.4 Scheduler +### 1.4.1 Ensure that the --profiling argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file +on the control plane node and set the below parameter. +--profiling=false + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1 +``` + +**Expected Result**: + +```console +'--profiling' is equal to 'false' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259" +``` + +### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml +on the control plane node and ensure the correct value for the --bind-address parameter + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1 | grep 'bind-address' +``` + +**Expected Result**: + +```console +'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info 
msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259" +``` + +## 2 Etcd Node Configuration +### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the etcd service documentation and configure TLS encryption. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml +on the master node and set the below parameters. +--cert-file='/path/to/ca-file' +--key-file='/path/to/key-file' + +**Audit Script:** `check_for_k3s_etcd.sh` + +```bash +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo $(grep -A 5 'peer-transport-security' 
/var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "--client-cert-auth=true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "--client-cert-auth=true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit Execution:** + +```bash +./check_for_k3s_etcd.sh 2.1 +``` + +**Expected Result**: + +```console +'cert-file' is present AND 'key-file' is present +``` + +**Returned Value**: + +```console +cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key +``` + +### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master +node and set the below parameter. 
+--client-cert-auth="true" + +**Audit Script:** `check_for_k3s_etcd.sh` + +```bash +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "--client-cert-auth=true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "--client-cert-auth=true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit Execution:** + +```bash +./check_for_k3s_etcd.sh 2.2 +``` + 
+**Expected Result**: + +```console +'--client-cert-auth' is present OR 'client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +client-cert-auth: true +``` + +### 2.3 Ensure that the --auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master +node and either remove the --auto-tls parameter or set it to false. + --auto-tls=false + +**Audit Script:** `check_for_k3s_etcd.sh` + +```bash +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + 
echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "--client-cert-auth=true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "--client-cert-auth=true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit Execution:** + +```bash +./check_for_k3s_etcd.sh 2.3 +``` + +**Expected Result**: + +```console +'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present +``` + +**Returned Value**: + +```console +error: process ID list syntax error Usage: ps [options] Try 'ps --help 'simple|list|output|threads|misc|all'' or 'ps --help 's|l|o|t|m|a'' for additional help text. For more details see ps(1). cat: /proc//environ: No such file or directory +``` + +### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the etcd service documentation and configure peer TLS encryption as appropriate +for your etcd cluster. +Then, edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the +master node and set the below parameters. 
+--peer-cert-file='/path/to/peer-cert-file' +--peer-key-file='/path/to/peer-key-file' + +**Audit Script:** `check_for_k3s_etcd.sh` + +```bash +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "--client-cert-auth=true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "--client-cert-auth=true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit 
Execution:** + +```bash +./check_for_k3s_etcd.sh 2.4 +``` + +**Expected Result**: + +```console +'cert-file' is present AND 'key-file' is present +``` + +**Returned Value**: + +```console +cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key +``` + +### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master +node and set the below parameter. +--peer-client-cert-auth=true + +**Audit Script:** `check_for_k3s_etcd.sh` + +```bash +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' 
/var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "--client-cert-auth=true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "--client-cert-auth=true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit Execution:** + +```bash +./check_for_k3s_etcd.sh 2.5 +``` + +**Expected Result**: + +```console +'--client-cert-auth' is present OR 'client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +client-cert-auth: true +``` + +### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master +node and either remove the --peer-auto-tls parameter or set it to false. 
+--peer-auto-tls=false + +**Audit Script:** `check_for_k3s_etcd.sh` + +```bash +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "--client-cert-auth=true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "--client-cert-auth=true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit Execution:** + +```bash +./check_for_k3s_etcd.sh 2.6 +``` + 
+**Expected Result**: + +```console +'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present +``` + +**Returned Value**: + +```console +error: process ID list syntax error Usage: ps [options] Try 'ps --help 'simple|list|output|threads|misc|all'' or 'ps --help 's|l|o|t|m|a'' for additional help text. For more details see ps(1). cat: /proc//environ: No such file or directory +``` + +### 2.7 Ensure that a unique Certificate Authority is used for etcd (Manual) + + +**Result:** pass + +**Remediation:** +[Manual test] +Follow the etcd documentation and create a dedicated certificate authority setup for the +etcd service. +Then, edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the +master node and set the below parameter. +--trusted-ca-file='/path/to/ca-file' + +**Audit Script:** `check_for_k3s_etcd.sh` + +```bash +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo $(grep -A 5 'peer-transport-security' 
/var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "--client-cert-auth=true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "--client-cert-auth=true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit Execution:** + +```bash +./check_for_k3s_etcd.sh 2.7 +``` + +**Expected Result**: + +```console +'trusted-ca-file' is present +``` + +**Returned Value**: + +```console +trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt +``` + +## 3.1 Authentication and Authorization +### 3.1.1 Client certificate authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be +implemented in place of client certificates. + +## 3.2 Logging +### 3.2.1 Ensure that a minimal audit policy is created (Manual) + + +**Result:** warn + +**Remediation:** +Create an audit policy file for your cluster. + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-policy-file' +``` + +### 3.2.2 Ensure that the audit policy covers key security concerns (Manual) + + +**Result:** warn + +**Remediation:** +Review the audit policy provided for the cluster and ensure that it covers +at least the following areas, +- Access to Secrets managed by the cluster. 
Care should be taken to only + log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in + order to avoid risk of logging sensitive data. +- Modification of Pod and Deployment objects. +- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`. +For most requests, minimally logging at the Metadata level is recommended +(the most basic level of logging). + +## 4.1 Worker Node Configuration Files +### 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, chmod 644 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf + +### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, +chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf + +### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Manual) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, +chmod 644 /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig + +**Audit:** + +```bash +stat -c %a /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig +``` + +**Expected Result**: + +```console +'permissions' is present OR '/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig' is not present +``` + +**Returned Value**: + +```console +644 +``` + +### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Manual) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. 
+For example, chown root:root /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig + +**Audit:** + +```bash +stat -c %U:%G /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig +``` + +**Expected Result**: + +```console +'root:root' is present OR '/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig' is not present +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, +chmod 644 /var/lib/rancher/k3s/server/cred/admin.kubeconfig + +**Audit:** + +```bash +stat -c %a /var/lib/rancher/k3s/agent/kubelet.kubeconfig +``` + +**Expected Result**: + +```console +'644' is equal to '644' +``` + +**Returned Value**: + +```console +644 +``` + +### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. 
+For example, +chown root:root /var/lib/rancher/k3s/server/cred/admin.kubeconfig + +**Audit:** + +```bash +stat -c %U:%G /var/lib/rancher/k3s/agent/kubelet.kubeconfig +``` + +**Expected Result**: + +```console +'root:root' is equal to 'root:root' +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual) + + +**Result:** pass + +**Remediation:** +Run the following command to modify the file permissions of the +--client-ca-file chmod 644 'filename' + +**Audit:** + +```bash +stat -c %a /var/lib/rancher/k3s/server/tls/server-ca.crt +``` + +**Expected Result**: + +```console +'644' is equal to '644' OR '640' is present OR '600' is present OR '444' is present OR '440' is present OR '400' is present OR '000' is present +``` + +**Returned Value**: + +```console +644 +``` + +### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual) + + +**Result:** pass + +**Remediation:** +Run the following command to modify the ownership of the --client-ca-file. 
+chown root:root 'filename' + +**Audit:** + +```bash +stat -c %U:%G /var/lib/rancher/k3s/server/tls/client-ca.crt +``` + +**Expected Result**: + +```console +'root:root' is equal to 'root:root' +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the following command (using the config file location identified in the Audit step) +chmod 644 /var/lib/kubelet/config.yaml + +### 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the following command (using the config file location identified in the Audit step) +chown root:root /var/lib/kubelet/config.yaml + +## 4.2 Kubelet +### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to +`false`. +If using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +`--anonymous-auth=false` +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'anonymous-auth' | grep -v grep +``` + +**Expected Result**: + +```console +'--anonymous-auth' is equal to 'false' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key 
--request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If +using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--authorization-mode=Webhook +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode' | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' does not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt 
--proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to +the location of the client CA file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--client-ca-file='path/to/client-ca-file' +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver'| tail -n1 | grep 'client-ca-file' | grep -v grep +``` + +**Expected Result**: + +```console +'--client-ca-file' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:18 node-01 k3s[15833]: time="2022-10-04T22:42:18Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key 
--request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 4.2.4 Ensure that the --read-only-port argument is set to 0 (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `readOnlyPort` to 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--read-only-port=0 +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'read-only-port' +``` + +**Expected Result**: + +```console +'--read-only-port' is equal to '0' OR '--read-only-port' is not present +``` + +**Returned Value**: + +```console +Oct 04 22:42:19 node-01 k3s[15833]: time="2022-10-04T22:42:19Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available'5%,nodefs.available'5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=node-01 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=c94666fd-3d6a-40a4-81d7-7a14f6ec1117 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key" +``` + +### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual) + + +**Result:** warn + +**Remediation:** +If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a +value other than 0. 
+If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--streaming-connection-idle-timeout=5m +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'streaming-connection-idle-timeout' +``` + +### 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--protect-kernel-defaults=true +Based on your system, restart the kubelet service. 
For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'protect-kernel-defaults' +``` + +**Expected Result**: + +```console +'--protect-kernel-defaults' is equal to 'true' +``` + +**Returned Value**: + +```console +Oct 04 22:42:19 node-01 k3s[15833]: time="2022-10-04T22:42:19Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available'5%,nodefs.available'5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=node-01 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=c94666fd-3d6a-40a4-81d7-7a14f6ec1117 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key" +``` + +### 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`. 
+If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +remove the --make-iptables-util-chains argument from the +KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'make-iptables-util-chains' +``` + +**Expected Result**: + +```console +'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present +``` + +**Returned Value**: + +```console +Oct 04 22:42:19 node-01 k3s[15833]: time="2022-10-04T22:42:19Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available'5%,nodefs.available'5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=node-01 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=c94666fd-3d6a-40a4-81d7-7a14f6ec1117 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key" +``` + +### 4.2.8 Ensure that the --hostname-override 
argument is not set (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +on each worker node and remove the --hostname-override argument from the +KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +### 4.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual) + + +**Result:** warn + +**Remediation:** +If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC containerd +``` + +### 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `tlsCertFile` to the location +of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile` +to the location of the corresponding private key file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameters in KUBELET_CERTIFICATE_ARGS variable. +--tls-cert-file='path/to/tls-certificate-file' +--tls-private-key-file='path/to/tls-key-file' +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 +``` + +**Expected Result**: + +```console +'--tls-cert-file' is present AND '--tls-private-key-file' is present +``` + +**Returned Value**: + +```console +Oct 04 22:42:19 node-01 k3s[15833]: time="2022-10-04T22:42:19Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available'5%,nodefs.available'5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=node-01 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=c94666fd-3d6a-40a4-81d7-7a14f6ec1117 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key" +``` + +### 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to add the line `rotateCertificates` to `true` or +remove it altogether to use the default value. 
+If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS +variable. +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC containerd +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'{.rotateCertificates}' is present OR '{.rotateCertificates}' is not present +``` + +### 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable. +--feature-gates=RotateKubeletServerCertificate=true +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +### 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual) + + +**Result:** warn + +**Remediation:** +If using a Kubelet config file, edit the file to set `TLSCipherSuites` to +TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +or to a subset of these values. +If using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the --tls-cipher-suites parameter as follows, or to a subset of these values. 
+--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC containerd +``` + +## 5.1 RBAC and Service Accounts +### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual) + + +**Result:** warn + +**Remediation:** +Identify all clusterrolebindings to the cluster-admin role. Check if they are used and +if they need this role or if they could use a role with fewer privileges. +Where possible, first bind users to a lower privileged role and then remove the +clusterrolebinding to the cluster-admin role : +kubectl delete clusterrolebinding [name] + +### 5.1.2 Minimize access to secrets (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove get, list and watch access to Secret objects in the cluster. + +### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual) + + +**Result:** warn + +**Remediation:** +Where possible replace any use of wildcards in clusterroles and roles with specific +objects or actions. + +### 5.1.4 Minimize access to create pods (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove create access to pod objects in the cluster. + +### 5.1.5 Ensure that default service accounts are not actively used. (Manual) + + +**Result:** warn + +**Remediation:** +Create explicit service accounts wherever a Kubernetes workload requires specific access +to the Kubernetes API server. 
+Modify the configuration of each default service account to include this value +automountServiceAccountToken: false + +### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) + + +**Result:** warn + +**Remediation:** +Modify the definition of pods and service accounts which do not need to mount service +account tokens to disable it. + +### 5.1.7 Avoid use of system:masters group (Manual) + + +**Result:** warn + +**Remediation:** +Remove the system:masters group from all users in the cluster. + +### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove the impersonate, bind and escalate rights from subjects. + +## 5.2 Pod Security Standards +### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual) + + +**Result:** warn + +**Remediation:** +Ensure that either Pod Security Admission or an external policy control system is in place +for every namespace which contains user workloads. + +### 5.2.2 Minimize the admission of privileged containers (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of privileged containers. + +### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostPID` containers. + +### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostIPC` containers. 
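Checks 5.2.1 through 5.2.4 can all be addressed with the built-in Pod Security Admission controller rather than a separate policy engine. The sketch below is illustrative, not part of the benchmark: the namespace name `sales-app` is an assumption, and Pod Security Admission requires a reasonably recent Kubernetes version. The `baseline` profile rejects privileged, `hostPID`, and `hostIPC` pods:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: sales-app   # illustrative namespace name
  labels:
    # Reject pods that violate the baseline profile
    # (privileged containers, hostPID, hostIPC, hostNetwork, ...).
    pod-security.kubernetes.io/enforce: baseline
    # Surface, but do not block, violations of the stricter profile.
    pod-security.kubernetes.io/warn: restricted
```

Applying such labels to every namespace that contains user workloads also satisfies the "at least one active policy control mechanism" requirement of 5.2.1.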
+ +### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostNetwork` containers. + +### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of containers with `.spec.allowPrivilegeEscalation` set to `true`. + +### 5.2.7 Minimize the admission of root containers (Automated) + + +**Result:** warn + +**Remediation:** +Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot` +or `MustRunAs` with the range of UIDs not including 0, is set. + +### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of containers with the `NET_RAW` capability. + +### 5.2.9 Minimize the admission of containers with added capabilities (Automated) + + +**Result:** warn + +**Remediation:** +Ensure that `allowedCapabilities` is not present in policies for the cluster unless +it is set to an empty array. + +### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual) + + +**Result:** warn + +**Remediation:** +Review the use of capabilities in applications running on your cluster. Where a namespace +contains applications which do not require any Linux capabilities to operate, consider adding +a PSP which forbids the admission of containers which do not drop all capabilities.
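The remediations in 5.2.5 through 5.2.10 all refer to admission policies. On clusters that still rely on PodSecurityPolicy (the `--enable-admission-plugins` list in the audit output above includes it, though PSP was removed in Kubernetes v1.25), a restrictive policy might look like the following sketch. The policy name is illustrative and the `seLinux`/`runAsUser`/`supplementalGroups`/`fsGroup` rules are required fields of the PSP spec:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example   # illustrative name
spec:
  privileged: false
  hostNetwork: false               # 5.2.5
  hostPID: false                   # 5.2.3
  hostIPC: false                   # 5.2.4
  allowPrivilegeEscalation: false  # 5.2.6
  runAsUser:
    rule: MustRunAsNonRoot         # 5.2.7
  requiredDropCapabilities:
    - ALL                          # 5.2.8 / 5.2.10
  # 5.2.9: allowedCapabilities is deliberately omitted
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
```

On newer clusters the same constraints are expressed through Pod Security Admission profiles or an external admission controller instead.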
+ +### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`. + +### 5.2.12 Minimize the admission of HostPath volumes (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of containers with `hostPath` volumes. + +### 5.2.13 Minimize the admission of containers which use HostPorts (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of containers which use `hostPort` sections. + +## 5.3 Network Policies and CNI +### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual) + + +**Result:** warn + +**Remediation:** +If the CNI plugin in use does not support network policies, consideration should be given to +making use of a different plugin, or finding an alternate mechanism for restricting traffic +in the Kubernetes cluster. + +### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual) + + +**Result:** warn + +**Remediation:** +Follow the documentation and create NetworkPolicy objects as you need them. + +## 5.4 Secrets Management +### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual) + + +**Result:** warn + +**Remediation:** +If possible, rewrite application code to read Secrets from mounted secret files, rather than +from environment variables. + +### 5.4.2 Consider external secret storage (Manual) + + +**Result:** warn + +**Remediation:** +Refer to the Secrets management options offered by your cloud provider or a third-party +secrets management solution. 
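For 5.3.2, a common starting point is a default-deny policy in each namespace, to which more specific allow rules are then added. The namespace below is illustrative, and the policy only has an effect when the CNI in use supports NetworkPolicies (5.3.1):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: sales-app   # illustrative namespace
spec:
  podSelector: {}        # an empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```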
+ +## 5.5 Extensible Admission Control +### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and set up image provenance. + +## 5.7 General Policies +### 5.7.1 Create administrative boundaries between resources using namespaces (Manual) + + +**Result:** warn + +**Remediation:** +Follow the documentation and create namespaces for objects in your deployment as you need +them. + +### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual) + + +**Result:** warn + +**Remediation:** +Use `securityContext` to enable the docker/default seccomp profile in your pod definitions. +An example is shown below: + securityContext: + seccompProfile: + type: RuntimeDefault + +### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a +suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker +Containers. + +### 5.7.4 The default namespace should not be used (Manual) + + +**Result:** warn + +**Remediation:** +Ensure that namespaces are created to allow for appropriate segregation of Kubernetes +resources and that all new resources are created in a specific namespace.
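Building on the seccomp snippet in 5.7.2 above, a fuller `securityContext` for 5.7.3 might combine the runtime-default seccomp profile with a non-root user and a reduced capability set. Everything below is an illustrative sketch: the pod name, namespace, image, and UID are assumptions, not values from this guide:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo   # illustrative
  namespace: sales-app          # a dedicated namespace, per 5.7.4
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000             # illustrative non-zero UID
    seccompProfile:
      type: RuntimeDefault      # see 5.7.2
  containers:
    - name: app
      image: rancher/hello-world   # illustrative image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```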
+ From 21f178faeec0cbbbb8793f37675ad5865dadca49 Mon Sep 17 00:00:00 2001 From: vickyhella Date: Mon, 27 Mar 2023 16:16:45 +0800 Subject: [PATCH 18/22] Update Chinese translation for v2.6 and v2.7 --- i18n/zh/code.json | 8 + .../deploy-workloads/workload-ingress.md | 2 +- .../enable-monitoring.md | 11 +- .../configure-azure-ad.md | 2 +- .../install-istio-on-rke2-cluster.md | 75 ++++++---- .../rancher-extensions.md | 2 +- .../fleet-gitops-at-scale.md | 6 +- .../monitoring-and-alerting.md | 28 ++-- .../use-new-nodes-in-an-infra-provider.md | 4 +- .../deploy-workloads/workload-ingress.md | 2 +- .../configure-azure-ad.md | 140 ++++++++++-------- .../authentication-config.md | 28 +++- 12 files changed, 188 insertions(+), 120 deletions(-) diff --git a/i18n/zh/code.json b/i18n/zh/code.json index 39bc0f03c4b..6dec4e5413e 100644 --- a/i18n/zh/code.json +++ b/i18n/zh/code.json @@ -392,5 +392,13 @@ "theme.tags.tagsPageTitle": { "message": "标签", "description": "The title of the tag list page" + }, + "theme.NavBar.navAriaLabel": { + "message": "主导航", + "description": "The ARIA label for the main navigation" + }, + "theme.docs.sidebar.navAriaLabel": { + "message": "文档侧边栏", + "description": "The ARIA label for the sidebar navigation" } } diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/quick-start-guides/deploy-workloads/workload-ingress.md b/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/quick-start-guides/deploy-workloads/workload-ingress.md index d6d32dbfa4f..d78e5a19324 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/quick-start-guides/deploy-workloads/workload-ingress.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/quick-start-guides/deploy-workloads/workload-ingress.md @@ -18,7 +18,7 @@ title: 部署带有 Ingress 的工作负载 1. 单击**创建**。 1. 点击 **Deployment**。 1. 为工作负载设置**名称**。 -1. 在 **Docker 镜像**字段中,输入 `rancher/hello-world`。注意区分大小写。 +1. 在**容器镜像**字段中,输入 `rancher/hello-world`。注意区分大小写。 1. 
点击**添加端口**并在**私有容器端口**字段中输入`80`。通过添加端口,你可以访问集群内外的应用。详情请参见 [Service](../../../pages-for-subheaders/workloads-and-pods.md#services)。 1. 单击**创建**。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/advanced-user-guides/monitoring-alerting-guides/enable-monitoring.md b/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/advanced-user-guides/monitoring-alerting-guides/enable-monitoring.md index 48402a1a277..b9515be0c10 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/advanced-user-guides/monitoring-alerting-guides/enable-monitoring.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/advanced-user-guides/monitoring-alerting-guides/enable-monitoring.md @@ -10,10 +10,11 @@ title: 启用 Monitoring ## 要求 -- 确保在每个节点上允许端口 9796 上的流量,因为 Prometheus 将从这里抓取指标。 -- 确保你的集群满足资源要求。集群应至少有 1950Mi 可用内存、2700m CPU 和 50Gi 存储。要查看资源限制和请求的明细,请查看[此处](../../../reference-guides/monitoring-v2-configuration/helm-chart-options.md#配置资源限制和请求)。 -- 在使用 RancherOS 或 Flatcar Linux 节点的 RKE 集群上安装 monitoring 时,请将 etcd 节点证书目录更改为 `/opt/rke/etc/kubernetes/ssl`。 -- 如果集群是使用 RKE CLI 配置的,而且地址设置为主机名而不是 IP 地址,请在安装的 Values 配置步骤中将 `rkeEtcd.clients.useLocalhost` 设置为 `true`。YAML 片段如下所示: +- 在每个节点上允许端口 9796 上的流量。Prometheus 将从这些端口抓取指标。 + - 如果 [PushProx](../../../integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md#pushprox) 被禁用(`ingressNginx.enabled` 设置为 `false`),或者你已经升级了安装了 Monitoring V1 的 Rancher 版本,你可能还需要为每个节点允许端口 10254 上的流量。 +- 确保你的集群满足资源要求。集群应至少有 1950Mi 可用内存、2700m CPU 和 50Gi 存储。有关资源限制和请求的详细信息,请参阅[配置资源限制和请求](../../../reference-guides/monitoring-v2-configuration/helm-chart-options.md#配置资源限制和请求)。 +- 在使用 RancherOS 或 Flatcar Linux 节点的 RKE 集群上安装 Monitoring 时,请将 etcd 节点证书目录更改为 `/opt/rke/etc/kubernetes/ssl`。 +- 如果集群是使用 RKE CLI 配置的,而且地址设置为主机名而不是 IP 地址,请在安装的 Values 配置步骤中将 `rkeEtcd.clients.useLocalhost` 设置为 `true`。例如: ```yaml rkeEtcd: @@ -27,7 +28,7 @@ rkeEtcd: ::: -# 设置资源限制和请求 +## 设置资源限制和请求 安装 `rancher-monitoring` 时可以配置资源请求和限制。要从 Rancher UI 
配置 Prometheus 资源,请单击左上角的 **Apps > Monitoring**。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md b/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md index f422df8b3f0..aaee08033f8 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md @@ -218,7 +218,7 @@ Rancher 不会验证你授予 Azure 应用程序的权限。我们仅支持使 ::: -1. 按照[此处](#3-设置-rancher-所需的权限)所述更新 Azure AD 应用注册的权限。这很关键。 +1. 按照[此处](#3-设置-rancher-所需的权限)所述更新 Azure AD 应用注册的权限。这个步骤非常关键。 1. 登录到 Rancher。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md b/i18n/zh/docusaurus-plugin-content-docs/current/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md index aabe0e791b2..811c89b3ea8 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md @@ -1,5 +1,5 @@ --- -title: 在 RKE2 集群上安装 Istio 的其他步骤 +title: 在 RKE2 和 K3s 集群上安装 Istio 的其他步骤 --- 通过 **Apps** 页面安装或升级 Istio Helm Chart 时: @@ -8,30 +8,53 @@ title: 在 RKE2 集群上安装 Istio 的其他步骤 1. 你将看到配置 Istio Helm Chart 的选项。在**组件**选项卡上,选中**启用 CNI** 旁边的框。 1. 
添加一个自定义覆盖文件,该文件指定 `cniBinDir` 和 `cniConfDir`。有关这些选项的更多信息,请参阅 [Istio 文档](https://istio.io/latest/docs/setup/additional-setup/cni/#helm-chart-parameters)。下方是一个示例: - ```yaml - apiVersion: install.istio.io/v1alpha1 - kind: IstioOperator - spec: - components: - cni: - enabled: true - k8s: - overlays: - - apiVersion: "apps/v1" - kind: "DaemonSet" - name: "istio-cni-node" - patches: - - path: spec.template.spec.containers.[name:install-cni].securityContext.privileged - value: true - values: - cni: - image: rancher/mirrored-istio-install-cni:1.9.3 - excludeNamespaces: - - istio-system - - kube-system - logLevel: info - cniBinDir: /opt/cni/bin - cniConfDir: /etc/cni/net.d - ``` + + + +```yaml +apiVersion: install.istio.io/v1alpha1 +kind: IstioOperator +spec: + components: + cni: + enabled: true + k8s: + overlays: + - apiVersion: "apps/v1" + kind: "DaemonSet" + name: "istio-cni-node" + patches: + - path: spec.template.spec.containers.[name:install-cni].securityContext.privileged + value: true + values: + cni: + cniBinDir: /opt/cni/bin + cniConfDir: /etc/cni/net.d +``` + + + +```yaml +apiVersion: install.istio.io/v1alpha1 +kind: IstioOperator +spec: + components: + cni: + enabled: true + k8s: + overlays: + - apiVersion: "apps/v1" + kind: "DaemonSet" + name: "istio-cni-node" + patches: + - path: spec.template.spec.containers.[name:install-cni].securityContext.privileged + value: true + values: + cni: + cniBinDir: /var/lib/rancher/k3s/data/current/bin + cniConfDir: /var/lib/rancher/k3s/agent/etc/cni/net.d +``` + + **结果**:现在你应该可以根据需要使用 Istio,包括 Sidecar 注入和通过 Kiali 进行监控。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/integrations-in-rancher/rancher-extensions.md b/i18n/zh/docusaurus-plugin-content-docs/current/integrations-in-rancher/rancher-extensions.md index 6a52bd81edb..7d8bec8a128 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/integrations-in-rancher/rancher-extensions.md +++ 
b/i18n/zh/docusaurus-plugin-content-docs/current/integrations-in-rancher/rancher-extensions.md @@ -81,4 +81,4 @@ Rancher v2.7.0 引入了**扩展(Extension)**的新功能。扩展允许用 ## 开发扩展 -要了解如何开发扩展,请参阅 [UI DevKit 文档](https://rancher.github.io/dashboard/plugins/plugins-getting-started)。 +要了解如何开发你自己的扩展,请参阅官方[入门指南](https://rancher.github.io/dashboard/extensions/extensions-getting-started)。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/fleet-gitops-at-scale.md b/i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/fleet-gitops-at-scale.md index e6770a85bca..3a47d95571b 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/fleet-gitops-at-scale.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/fleet-gitops-at-scale.md @@ -2,7 +2,7 @@ title: Fleet - 大规模的 GitOps --- -Fleet 是大规模的 GitOps。你可以使用 Fleet 管理多达一百万个集群。此外,它非常轻量,因此也非常适用于[单个集群](https://fleet.rancher.io/single-cluster-install/)。但是,它在[大规模](https://fleet.rancher.io/multi-cluster-install/)场景下的功能更加强大。大规模指的是大量集群、大量部署或大量团队。 +Fleet 是大规模的 GitOps。你可以使用 Fleet 管理多达一百万个集群。此外,它非常轻量,因此也非常适用于[单个集群](https://fleet.rancher.io/tut-deployment#single-cluster-examples)。但是,它在[大规模](https://fleet.rancher.io/tut-deployment#multi-cluster-examples)场景下的功能更加强大。大规模指的是大量集群、大量部署或大量团队。 Fleet 是一个独立于 Rancher 的项目,你可以使用 Helm 将它安装在任何 Kubernetes 集群上。 @@ -31,7 +31,7 @@ Fleet 预装在 Rancher 中,可以通过 Rancher UI 中的**持续交付**选 1. 单击左侧导航栏上的 **Git 仓库**将 git 仓库部署到当前工作空间中的集群中。 -1. 选择你的 [git 仓库](https://fleet.rancher.io/gitrepo-add/)和[目标集群/集群组](https://fleet.rancher.io/gitrepo-structure/)。你还可以单击左侧导航栏中的**集群组**在 UI 中创建集群组。 +1. 选择你的 [git 仓库](https://fleet.rancher.io/gitrepo-add/)和[目标集群/集群组](https://fleet.rancher.io/gitrepo-targets/)。你还可以单击左侧导航栏中的**集群组**在 UI 中创建集群组。 1. 
部署 git 仓库后,你可以通过 Rancher UI 监控应用。 @@ -41,7 +41,7 @@ Fleet 预装在 Rancher 中,可以通过 Rancher UI 中的**持续交付**选 ## GitHub 仓库 -你可以单击此处获取 [Fleet Helm Chart](https://github.com/rancher/fleet/releases/tag/v0.3.10)。 +你可以单击此处获取 [Fleet Helm Chart](https://github.com/rancher/fleet/releases)。 ## 在代理后使用 Fleet diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/monitoring-and-alerting.md b/i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/monitoring-and-alerting.md index 8bea3e29ed7..71ac5e0ec93 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/monitoring-and-alerting.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/monitoring-and-alerting.md @@ -14,18 +14,16 @@ Prometheus 支持查看 Rancher 和 Kubernetes 对象的指标。通过使用时 在 Rancher v2.5 中引入的 `rancher-monitoring` operator 由 [Prometheus](https://prometheus.io/)、[Grafana](https://grafana.com/grafana/)、[Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/), [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator) 和 [Prometheus adapter](https://github.com/DirectXMan12/k8s-prometheus-adapter) 提供支持。 -Monitoring 应用允许你: +Monitoring 应用: -- 监控集群节点、Kubernetes 组件和软件部署的状态和进程 -- 根据 Prometheus 收集的指标定义告警 -- 创建自定义 Grafana 仪表板 -- 使用 Prometheus Alertmanager 通过电子邮件、Slack、PagerDuty 等配置告警通知 -- 根据 Prometheus 收集的指标,将预先计算的、经常需要的,或计算成本高的表达式定义为新的时间序列 -- 通过 Prometheus Adapter,将从 Prometheus 收集的指标公开给 Kubernetes Custom Metrics API,以便在 HPA 中使用 +- 监控集群节点、Kubernetes 组件和软件部署的状态和进程。 +- 根据 Prometheus 收集的指标定义告警。 +- 创建自定义 Grafana 仪表板。 +- 使用 Prometheus Alertmanager 通过电子邮件、Slack、PagerDuty 等配置告警通知。 +- 根据 Prometheus 收集的指标,将预先计算的、经常需要的,或计算成本高的表达式定义为新的时间序列。 +- 通过 Prometheus Adapter,将从 Prometheus 收集的指标公开给 Kubernetes Custom Metrics API,以便在 HPA 中使用。 -## Monitoring 的工作原理 - -有关 monitoring 组件如何协同工作的说明,请参阅[此页面](../integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md)。 +有关监控组件如何协同工作的说明,请参阅 [Monitoring 
工作原理](../integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md)。 ## 默认组件和部署 @@ -65,7 +63,7 @@ Monitoring 应用会默认部署一些告警。要查看默认告警,请转到 ### 在 Rancher 中配置 Monitoring 资源 -> 此处的配置参考假设你已经熟悉 monitoring 组件的协同工作方式。如需更多信息,请参阅 [monitoring 的工作原理](../integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md)。 +此处的配置参考假设你已经熟悉 monitoring 组件的协同工作方式。如需更多信息,请参阅 [monitoring 的工作原理](../integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md)。 - [ServiceMonitor 和 PodMonitor](../reference-guides/monitoring-v2-configuration/servicemonitors-and-podmonitors.md) - [接收器](../reference-guides/monitoring-v2-configuration/receivers.md) @@ -76,7 +74,7 @@ Monitoring 应用会默认部署一些告警。要查看默认告警,请转到 ### 配置 Helm Chart 选项 -有关 `rancher-monitoring` Chart 选项的更多信息,包括设置资源限制和请求的选项,请参阅[此页面](../reference-guides/monitoring-v2-configuration/helm-chart-options.md)。 +有关 `rancher-monitoring` Chart 选项的更多信息,包括设置资源限制和请求的选项,请参阅 [Helm Chart 选项](../reference-guides/monitoring-v2-configuration/helm-chart-options.md)。 ## Windows 集群支持 @@ -84,11 +82,11 @@ Monitoring 应用会默认部署一些告警。要查看默认告警,请转到 为了能够为 Windows 完全部署 Monitoring V2,你的所有 Windows 主机都必须至少具有 v0.1.0 的 [wins](https://github.com/rancher/wins) 版本。 -有关如何在现有 Windows 主机上升级 wins 版本的更多详细信息,请参阅 [Windows 集群对 Monitoring V2 的支持](../integrations-in-rancher/monitoring-and-alerting/windows-support.md)。 +有关如何在现有 Windows 主机上升级 wins 版本的更多信息,请参阅 [Windows 集群对 Monitoring V2 的支持](../integrations-in-rancher/monitoring-and-alerting/windows-support.md)。 ## 已知问题 -有一个[已知问题](https://github.com/rancher/rancher/issues/28787#issuecomment-693611821),即 K3s 集群需要更多的默认内存。如果你在 K3s 集群上启用 monitoring,我们建议将 `prometheus.prometheusSpec.resources.memory.limit` 设置为 2500 Mi,并将 `prometheus.prometheusSpec.resources.memory.request` 设置为 1750 Mi。 +有一个[已知问题](https://github.com/rancher/rancher/issues/28787#issuecomment-693611821),即 K3s 集群需要的内存超过分配的默认内存。如果你在 K3s 集群上启用 Monitoring,将 `prometheus.prometheusSpec.resources.memory.limit` 设置为 2500 Mi,并将 
`prometheus.prometheusSpec.resources.memory.request` 设置为 1750 Mi。 -有关调试高内存用量的提示,请参阅[此页面](../how-to-guides/advanced-user-guides/monitoring-alerting-guides/debug-high-memory-usage.md)。 +如需获取意见和建议,请参阅[调试高内存使用情况](../how-to-guides/advanced-user-guides/monitoring-alerting-guides/debug-high-memory-usage.md)。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/use-new-nodes-in-an-infra-provider.md b/i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/use-new-nodes-in-an-infra-provider.md index 9e990b85cab..bd0ce6115e3 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/use-new-nodes-in-an-infra-provider.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/pages-for-subheaders/use-new-nodes-in-an-infra-provider.md @@ -72,7 +72,7 @@ title: 在云厂商的新节点上启动 Kubernetes #### 节点池污点 -如果你没有在节点模板上定义[污点](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/),则可以为每个节点池添加污点。相比在节点模板上添加污点,在节点池上添加污点的好处在于,你可以替换节点模板,而不必担心污点是否在节点模板中。 +如果你没有在节点模板上定义[污点](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/),则可以为每个节点池添加污点。将污点添加到节点池的好处是你可以更改节点模板,而不需要先确保污点存在于新模板中。 每个污点都将自动添加到节点池中已创建的节点。因此,如果你在已有节点的节点池中添加污点,污点不会应用到已有的节点,但是添加到该节点池中的新节点都将获得该污点。 @@ -149,4 +149,4 @@ RKE2 CLI 公开了 `server` 和 `agent` 两个角色,它们分别代表 Kubern - 至少拥有三个角色为 etcd 的节点,来确保失去一个节点时仍能存活。 - 至少两个节点具有 controlplane 角色,以实现主组件高可用性。 -- 至少两个具有 worker 角色的节点,用于在节点故障时重新安排工作负载。 \ No newline at end of file +- 至少两个具有 worker 角色的节点,用于在节点故障时重新安排工作负载。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.6/getting-started/quick-start-guides/deploy-workloads/workload-ingress.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.6/getting-started/quick-start-guides/deploy-workloads/workload-ingress.md index 67e9a3a3de1..86f8017f09e 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.6/getting-started/quick-start-guides/deploy-workloads/workload-ingress.md +++ 
b/i18n/zh/docusaurus-plugin-content-docs/version-2.6/getting-started/quick-start-guides/deploy-workloads/workload-ingress.md @@ -18,7 +18,7 @@ title: 部署带有 Ingress 的工作负载 1. 单击**创建**。 1. 点击 **Deployment**。 1. 为工作负载设置**名称**。 -1. 在 **Docker 镜像**字段中,输入 `rancher/hello-world`。注意区分大小写。 +1. 在**容器镜像**字段中,输入 `rancher/hello-world`。注意区分大小写。 1. 在 `Service Type` 点击 **Add Port** 和 `Cluster IP`,并在 **Private Container Port** 字段中输入`80`。你可以将 `Name` 留空或指定名称。通过添加端口,你可以访问集群内外的应用。有关详细信息,请参阅 [Service](../../../pages-for-subheaders/workloads-and-pods.md#services)。 1. 单击**创建**。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.6/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.6/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md index 7056e3b476d..1ee06e2a53a 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.6/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.6/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md @@ -3,12 +3,14 @@ title: 配置 Azure AD --- - + ## Microsoft Graph API Microsoft Graph API 现在是设置 Azure AD 的流程。下文将帮助[新用户](#新用户设置)使用新实例来配置 Azure AD,并帮助现有 Azure 应用所有者[迁移到新流程](#从-azure-ad-graph-api-迁移到-microsoft-graph-api)。 +Rancher 中的 Microsoft Graph API 流程正在不断发展。建议你使用最新的 2.6 补丁版本,该版本仍在积极开发中,并将持续获得新功能和改进。 + ### 新用户设置 如果你在 Azure 中托管了一个 Active Directory(AD)实例,你可以将 Rancher 配置为允许你的用户使用 AD 账号登录。你需要在 Azure 和 Rancher 中进行 Azure AD 外部身份验证。 @@ -26,7 +28,7 @@ Microsoft Graph API 现在是设置 Azure AD 的流程。下文将帮助[新用 :::tip -在开始之前,我们建议你创建一个空文本文件。你可以将 Azure 相关的值复制到该文件,然后再粘贴到 Rancher 中。 +在开始之前,打开两个浏览器选项卡:一个用于 Rancher,另一个用于 Azure 门户。这样,你可以将门户的配置值复制并粘贴到 Rancher 中。 ::: @@ -39,9 +41,7 @@ 
Microsoft Graph API 现在是设置 Azure AD 的流程。下文将帮助[新用 1. 使用搜索功能打开 **App registrations** 服务。 - ![Open App Registrations](/img/search-app-registrations.png) - -1. 单击 **New registrations** 并完成 **Create** 表单。 +1. 点击 **New registration** 并填写表单。 ![New App Registration](/img/new-app-registration.png) @@ -80,20 +80,17 @@ Microsoft Graph API 现在是设置 Azure AD 的流程。下文将帮助[新用 ![Open Rancher Registration](/img/open-rancher-app-reg.png) -1. 在左侧的导航窗格中,单击 **Certificates and Secrets**。 +1. 在导航窗格中,单击 **Certificates & secrets**。 1. 单击 **New client secret**。 ![创建新的客户端密文](/img/new-client-secret.png) 1. 输入 **Description**(例如 `Rancher`)。 -1. 从 **Expires** 下的选项中选择密钥的持续时间。此下拉菜单设置的是密钥的到期日期。日期越短则越安全,但是在到期后你需要创建新密钥。 +1. 从 **Expires** 下的选项中选择持续时间。此下拉菜单设置的是密钥的到期日期。日期越短则越安全,但需要你更频繁地创建新密钥。 + 请注意,如果检测到应用程序 Secret 已过期,用户将无法登录 Rancher。为避免此问题,请在 Azure 中轮换 Secret 并在过期前在 Rancher 中更新它。 1. 单击 **Add**(无需输入值,保存后会自动填充)。 -1. 将键值复制保存到[空文本文件](#tip)。 - - 稍后你将在 Rancher UI 中输入此密钥作为你的 **Application Secret**。 - - 你将无法在 Azure UI 中再次访问该键值。 +1. 稍后你将在 Rancher UI 中输入此密钥作为你的 **Application Secret**。由于你将无法在 Azure UI 中再次访问键值,因此请在其余设置过程中保持打开此窗口。 #### 3. 设置 Rancher 所需的权限 @@ -101,63 +98,75 @@ Microsoft Graph API 现在是设置 Azure AD 的流程。下文将帮助[新用 :::caution -请确保你设置了 Application 和 NOT Delegated 的权限类型。否则,你可能无法登录 Azure AD。禁用/重新启用 Azure AD 无法解决此问题,你需要等待一小时或手动删除缓存值。 +确保你设置了 Application 权限,而*不是* Delegated 权限。否则,你将无法登录 Azure AD。 ::: -1. 从左侧的导航窗格中,选择 **API permissions**。 - - ![Open Required Permissions](/img/select-req-permissions.png) +1. 在导航窗格中,选择 **API permissions**。 1. 单击 **Add a permission**。 -1. 在 **Microsoft Graph** 中,选择以下 **Application Permissions**: - - `Group.Read.All` - - `User.Read.All` +1. 从 Microsoft Graph API 中,选择以下 **Application Permissions**: `Directory.Read.All`。 - ![选择 API 权限](/img/api-permissions-2-6.png) + ![选择 API 权限](/img/api-permissions.png) -1. 
返回左侧导航栏中的 **API permissions**。在那里,单击 **Grant admin consent**。然后单击 **Yes**。 +:::note - :::note +在 Rancher 2.6.7-2.6.10 版本中,你需要使用 `User.Read.All` 和 `Group.Read.All` 来获取权限。在 v2.6.11 中已更改为允许范围较小的权限(例如 `Directory.Read.All`)。 - 你必须以 Azure 管理员身份登录才能保存你的权限设置。 +::: - ::: +1. 返回导航栏中的 **API permissions**。在那里,单击 **Grant admin consent**。然后单击 **Yes**。该应用程序的权限应如下所示: + +![Open Required Permissions](/img/select-req-permissions.png) + +:::note + +Rancher 不会验证你授予 Azure 应用程序的权限。你可以自由使用任何你所需的权限,只要这些权限允许 Rancher 使用 AD 用户和组。 + +具体来说,Rancher 需要允许以下操作的权限: +- 获取一个用户。 +- 列出所有用户。 +- 列出给定用户所属的组。 +- 获取一个组。 +- 列出所有组。 + +Rancher 执行这些操作来登录用户或搜索用户/组。请记住,权限必须是 `Application` 类型。 + +下面是几个满足 Rancher 需求的权限组合示例: +- `Directory.Read.All` +- `User.Read.All` 和 `GroupMember.Read.All` +- `User.Read.All` 和 `Group.Read.All` + +::: #### 4. 复制 Azure 应用数据 +![Application ID](/img/app-configuration.png) + 1. 获取你的 Rancher **租户 ID**。 1. 使用搜索打开 **App registrations**。 - ![Open App Registrations](/img/search-app-registrations.png) - 1. 找到你为 Rancher 创建的项。 - 1. 复制 **Directory ID** 并粘贴到你的[文本文件](#tip)。 - - ![Tenant ID](/img/tenant-id.png) - - - 你将把这个值作为 **Tenant ID** 粘贴到 Rancher。 + 1. 复制 **Directory ID** 并将其作为 **Tenant ID** 粘贴到 Rancher 中。 1. 获取你的 Rancher **Application (Client) ID**。 - 2.1. 使用搜索打开 **App registrations**(如果还没有的话)。 + 1. 如果你还未在该位置,请使用搜索打开 **App registrations**。 - 2.2. 在 **Overview**中,找到你为 Rancher 创建的条目。 + 1. 在 **Overview**中,找到你为 Rancher 创建的条目。 - 2.3. 复制 **Application (Client) ID** 并将其粘贴到你的[文本文件](#tip)。 + 1. 复制 **Application (Client) ID** 并将其作为 **Application ID** 粘贴到 Rancher 中。 - ![Application ID](/img/application-client-id.png) - -1. 你的端点选项通常是 [Standard](#global) 和 [China](#china)。使用这些选项,你只需要输入 **Tenant ID**、**Application ID** 和 **Application Secret**(Rancher 将负责其余的工作)。 +1. 
你的端点选项通常是 [Standard](#global) 或 [China](#china)。对于这两个选项,你只需要输入 **Tenant ID**、**Application ID** 和 **Application Secret**。 ![标准端点选项](/img/tenant-application-id-secret.png) **对于自定义端点**: -**警告**:Rancher 不支持也不完全测试自定义端点。 +**警告**:Rancher 未测试也未完全支持自定义端点。 你还需要手动输入 Graph、Token 和 Auth Endpoints。 @@ -165,7 +174,7 @@ Microsoft Graph API 现在是设置 Azure AD 的流程。下文将帮助[新用 ![点击端点](/img/endpoints.png) -- 将以下端点复制并粘贴到你的[文本文件](#tip)中(这些值将是你的 Rancher 端点值):确保复制端点的 v1 版本。 +- 以下端点将是你的 Rancher 端点值。请使用这些端点的 v1 版本。 - **Microsoft Graph API endpoint**(Graph 端点) - **OAuth 2.0 token endpoint (v1)**(Token 端点) - **OAuth 2.0 authorization endpoint (v1)** (Auth 端点) @@ -222,14 +231,19 @@ Microsoft Graph API 现在是设置 Azure AD 的流程。下文将帮助[新用 ### 从 Azure AD Graph API 迁移到 Microsoft Graph API -由于 [Azure AD Graph API](https://docs.microsoft.com/en-us/graph/migrate-azure-ad-graph-overview) 已于 2022 年 6 月弃用并将于 2022 年底停用,因此用户应更新其 Azure AD 应用程序以在 Rancher 中使用新的 [Microsoft Graph API](https://docs.microsoft.com/en-us/graph/use-the-api)。 +由于 [Azure AD Graph API](https://docs.microsoft.com/en-us/graph/migrate-azure-ad-graph-overview) 已弃用并计划于 2023 年 6 月停用,管理员应更新他们的 Azure AD 应用程序以在 Rancher 中使用 [Microsoft Graph API](https://docs.microsoft.com/en-us/graph/use-the-api)。 +你需要在端点弃用之前完成操作。 +如果在停用后 Rancher 仍配置为使用 Azure AD Graph API,用户可能无法使用 Azure AD 登录 Rancher。 #### 在 Rancher UI 中更新端点 -> **重要提示**:管理员应该在他们提交下面第 4 步中的端点迁移之前创建一个[备份](../../../new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher.md)。 +:::caution -1. 按照[此处](#3-设置-rancher-所需的权限)所述更新 Azure AD 应用注册的权限。 - (**重要**)。 +管理员需要在迁移下述端点之前创建一个 [Rancher 备份](../../../new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher.md)。 + +::: + +1. [更新](#3-设置-rancher-所需的权限) Azure AD 应用程序注册的权限。这个步骤非常关键。 1. 登录到 Rancher。 @@ -261,17 +275,20 @@ Microsoft Graph API 现在是设置 Azure AD 的流程。下文将帮助[新用 1. 
如果 Azure 应用程序所有者想要轮换应用程序密钥,他们也需要在 Rancher 中进行轮换(因为在 Azure 中更改应用程序密钥时,Rancher 不会自动更新应用程序密钥)。在 Rancher 中,它存储在名为 `azureadconfig-applicationsecret` 的 Kubernetes 密文中,该密文位于 `cattle-global-data` 命名空间中。 -1. **注意**:如果管理员使用现有 Azure AD 设置升级到 Rancher v2.6.7 并选择了禁用身份验证提供程序,他们将无法恢复以前的设置,也无法设置使用旧流程重新设置 Azure AD。然后,管理员需要使用新的身份验证流程重新注册。Rancher 现在使用了新的 Graph API,因此,用户需要在 Azure 门户中设置[适当的权限](#3-设置-rancher-所需的权限)。 +:::caution + +如果你使用现有的 Azure AD 设置升级到 Rancher v2.6.7+,并选择了禁用认证提供程序,你将无法恢复以前的设置。你也无法使用旧流程设置 Azure AD。你需要使用新的认证流程重新注册。由于 Rancher 现在使用 Graph API,因此用户需要[在 Azure 门户中设置适当的权限](#3-设置-rancher-所需的权限)。 + +::: #### Global: -Rancher 字段 | 已弃用端点 +| Rancher 字段 | 已弃用的端点 | ---------------- | ------------------------------------------------------------- -Auth 端点 | https://login.microsoftonline.com/{tenantID}/oauth2/authorize -端点 | https://login.microsoftonline.com/ -Graph 端点 | https://graph.windows.net/ -Token 端点 | https://login.microsoftonline.com/{tenantID}/oauth2/token ---- +| Auth 端点 | https://login.microsoftonline.com/{tenantID}/oauth2/authorize | +| 端点 | https://login.microsoftonline.com/ | +| Graph 端点 | https://graph.windows.net/ | +| Token 端点 | https://login.microsoftonline.com/{tenantID}/oauth2/token | | Rancher 字段 | 新端点 | ---------------- | ------------------------------------------------------------------ @@ -282,13 +299,12 @@ Token 端点 | https://login.microsoftonline.com/{tenantID}/oauth2/token #### 中国: -Rancher 字段 | 已弃用端点 +| Rancher 字段 | 已弃用的端点 | ---------------- | ---------------------------------------------------------- -Auth 端点 | https://login.chinacloudapi.cn/{tenantID}/oauth2/authorize -端点 | https://login.chinacloudapi.cn/ -Graph 端点 | https://graph.chinacloudapi.cn/ -Token 端点 | https://login.chinacloudapi.cn/{tenantID}/oauth2/token ---- +| Auth 端点 | https://login.chinacloudapi.cn/{tenantID}/oauth2/authorize | +| 端点 | https://login.chinacloudapi.cn/ | +| Graph 端点 | https://graph.chinacloudapi.cn/ | +| Token 端点 | https://login.chinacloudapi.cn/{tenantID}/oauth2/token | | 
Rancher 字段 | 新端点 | ---------------- | ------------------------------------------------------------------------- @@ -301,19 +317,19 @@ Token 端点 | https://login.chinacloudapi.cn/{tenantID}/oauth2/token -## Azure AD Graph API +## 已弃用的 Azure AD Graph API > **重要提示**: > -> - [Azure AD Graph API](https://docs.microsoft.com/en-us/graph/migrate-azure-ad-graph-overview) 已于 2022 年 6 月弃用,并将于 2022 年底停用。我们将更新我们的文档,以便在停用时向社区提供建议。Rancher 现在使用 [Microsoft Graph API](https://docs.microsoft.com/en-us/graph/use-the-api) 来将 Azure AD 设置为外部身份验证提供程序。 +> - [Azure AD Graph API](https://docs.microsoft.com/en-us/graph/migrate-azure-ad-graph-overview) 已被弃用,Microsoft 将在 2023 年 6 月 30 日后随时停用它且不会另行通知。我们将更新我们的文档,以便在停用时向社区提供建议。Rancher 现在使用 [Microsoft Graph API](https://docs.microsoft.com/en-us/graph/use-the-api) 来将 Azure AD 设置为外部身份验证提供程序。 > > -> - 对于想要迁移的新用户或现有用户,请参阅 Rancher v2.6.7 选项卡。 +> - 如果你是新用户或希望进行迁移,请参阅新的流程说明: Rancher v2.6.7+。 > > -> - 对于在 Azure AD Graph API 停用后不希望升级到 v2.6.7 的现有用户,他们需要: -> - 使用内置的 Rancher 身份验证,或者 -> - 使用另一个第三方身份验证系统并在 Rancher 中进行设置。请参阅[身份验证文档](../../../../pages-for-subheaders/authentication-config.md),了解如何配置其他开放式身份验证提供程序。 +> - 如果你不想在 Azure AD Graph API 停用后升级到 v2.6.7+,你需要: +> - 使用内置的 Rancher 身份认证,或者 +> - 使用另一个第三方身份认证系统并在 Rancher 中进行设置。请参阅[身份验证文档](../../../../pages-for-subheaders/authentication-config.md),了解如何配置其他开放式身份验证提供程序。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.6/pages-for-subheaders/authentication-config.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.6/pages-for-subheaders/authentication-config.md index 014aae64477..d36b0c66add 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.6/pages-for-subheaders/authentication-config.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.6/pages-for-subheaders/authentication-config.md @@ -26,9 +26,7 @@ Rancher 身份验证代理支持与以下外部身份验证服务集成: | [Google OAuth](../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-google-oauth.md) | | 
[Shibboleth](configure-shibboleth-saml.md) | -
- -同时,Rancher 也提供了[本地验证](../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/create-local-users.md)。 +同时,Rancher 也提供了[本地身份验证](../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/create-local-users.md)。 大多数情况下,应该使用外部身份验证服务,而不是本地身份验证,因为外部身份验证允许对用户进行集中管理。但是你可能需要一些本地身份验证用户,以便在特定的情况下(例如在外部身份验证系统不可用或正在进行维护时)管理 Rancher。 @@ -109,3 +107,27 @@ Rancher 依赖用户和组来决定允许登录到 Rancher 的用户,以及他 如果你需要重新配置或禁用以前设置的提供程序然后再重新启用它,请确保进行此操作的用户使用外部用户身份登录 Rancher,而不是本地管理员。 ::: + +## 禁用认证提供程序 + +禁用身份认证提供程序时,Rancher 会删除与其关联的所有资源,例如: +- 密文 +- 全局角色绑定 +- 集群角色模板绑定 +- 项目角色模板绑定 +- 与提供商关联的外部用户,这些用户从未以本地用户身份登录到 Rancher + +由于此操作可能会导致许多资源丢失,因此你可能希望在提供程序上添加保护措施。 +为确保在禁用身份认证提供程序时不会运行此清理,请向相应的身份认证配置添加特殊注释。 + +例如,要为 Azure AD 提供程序添加安全措施,请注释 `azuread` authconfig 对象: + +`kubectl annotate --overwrite authconfig azuread management.cattle.io/auth-provider-cleanup='user-locked'` + +在你将注释设置为 `unlocked` 之前,Rancher 不会执行清理。 + +### 手动运行资源清理 + +即使在你配置了另一个身份认证提供程序,Rancher 也可能会保留 local 集群中已禁用的身份认证提供程序配置的资源。例如,如果你使用 Provider A,然后禁用了它并开始使用 Provider B,当你升级到新版本的 Rancher 时,你可以手动触发对 Provider A 配置的资源的清理。 + +要为已禁用的身份认证提供程序手动触发清理,请将带有 `unlocked` 值的 `management.cattle.io/auth-provider-cleanup` 注释添加到 auth 配置中。 From 40bfe603fa8175ba3bcd3ac69d07bc3cbd5a78f4 Mon Sep 17 00:00:00 2001 From: vickyhella Date: Tue, 28 Mar 2023 18:25:56 +0800 Subject: [PATCH 19/22] Fix typo --- .../authentication-config/configure-azure-ad.md | 2 +- .../authentication-config/configure-azure-ad.md | 2 +- .../authentication-config/configure-azure-ad.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md b/docs/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md index b8899f6c7fa..5bbd17f92ab 100644 --- 
a/docs/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md
+++ b/docs/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md
@@ -253,7 +253,7 @@ If you need to roll back your migration, please note the following:
 
 1. Azure app owners who want to rotate the Application Secret will need to also rotate it in Rancher as Rancher does not automatically update the Application Secret when it is changed in Azure. In Rancher, note that it is stored in a Kubernetes secret called `azureadconfig-applicationsecret` which is in the `cattle-global-data` namespace.
 
-1. **Caution:** If admins upgrade to Rancher v2.7.0+ with an existing Azure AD setup and choose to disable the auth provider, they won't be able to restore the previous setup and also will not be able to set up Azure AD anew using the old flow. Admins will then need to register again with the new auth flow. Rancher now uses the new Graph API and, therefore, users need set up the [proper permissions in the Azure portal](#3-set-required-permissions-for-rancher).
+1. **Caution:** If admins upgrade to Rancher v2.7.0+ with an existing Azure AD setup and choose to disable the auth provider, they won't be able to restore the previous setup and also will not be able to set up Azure AD using the old flow. Admins will then need to register again with the new auth flow. Rancher now uses the new Graph API and, therefore, users need to set up the [proper permissions in the Azure portal](#3-set-required-permissions-for-rancher).
 
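The rotation step above touches a plain Kubernetes secret, so it can be inspected and updated directly with `kubectl`. Below is a hedged sketch of that secret's shape; the data key name is an assumption and may differ between Rancher versions, so verify it on your cluster (e.g. with `kubectl -n cattle-global-data get secret azureadconfig-applicationsecret -o yaml`) before editing:

```yaml
# Illustrative shape of the secret that stores the Azure AD Application Secret.
# The data key name ("applicationsecret") is an assumption -- confirm it on
# your own cluster before changing anything.
apiVersion: v1
kind: Secret
metadata:
  name: azureadconfig-applicationsecret
  namespace: cattle-global-data
type: Opaque
data:
  applicationsecret: <base64-encoded Application Secret>
```

After rotating the secret in Azure, updating this object (or re-entering the value in the Rancher UI) keeps the two sides in sync.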
#### Global: diff --git a/versioned_docs/version-2.5/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-azure-ad.md b/versioned_docs/version-2.5/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-azure-ad.md index 827de657915..5b4fff3d362 100644 --- a/versioned_docs/version-2.5/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-azure-ad.md +++ b/versioned_docs/version-2.5/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-azure-ad.md @@ -240,7 +240,7 @@ If you need to roll back your migration, please note the following: 1. Azure app owners who want to rotate the Application Secret will need to also rotate it in Rancher as Rancher does not automatically update the Application Secret when it is changed in Azure. In Rancher, note that it is stored in a Kubernetes secret called `azureadconfig-applicationsecret` which is in the `cattle-global-data` namespace. -1. **Caution:** If admins upgrade to Rancher v2.5.16 with an existing Azure AD setup and choose to disable the auth provider, they won't be able to restore the previous setup and also will not be able to set up Azure AD anew using the old flow. Admins will then need to register again with the new auth flow. Rancher now uses the new Graph API and, therefore, users need set up the [proper permissions in the Azure portal](#3-set-required-permissions-for-rancher). +1. **Caution:** If admins upgrade to Rancher v2.5.16 with an existing Azure AD setup and choose to disable the auth provider, they won't be able to restore the previous setup and also will not be able to set up Azure AD using the old flow. Admins will then need to register again with the new auth flow. 
Rancher now uses the new Graph API and, therefore, users need to set up the [proper permissions in the Azure portal](#3-set-required-permissions-for-rancher).
 
 #### Global:
 
diff --git a/versioned_docs/version-2.6/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md b/versioned_docs/version-2.6/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md
index 274e1eb485e..57bb84548d5 100644
--- a/versioned_docs/version-2.6/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md
+++ b/versioned_docs/version-2.6/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad.md
@@ -277,7 +277,7 @@ If you need to roll back your migration, please note the following:
 
 :::caution
 
-If you upgrade to Rancher v2.6.7+ with an existing Azure AD setup, and choose to disable the auth provider, you won't be able to restore the previous setup. You also won't be able to set up Azure AD anew using the old flow. You'll need to re-register with the new auth flow. Since Rancher now uses the Graph API, users need set up the [proper permissions in the Azure portal](#3-set-required-permissions-for-rancher).
+If you upgrade to Rancher v2.6.7+ with an existing Azure AD setup, and choose to disable the auth provider, you won't be able to restore the previous setup. You also won't be able to set up Azure AD using the old flow. You'll need to re-register with the new auth flow. Since Rancher now uses the Graph API, users need to set up the [proper permissions in the Azure portal](#3-set-required-permissions-for-rancher).
 
::: From fba622e38cedaad7743b80aacfede2bcf5d0864b Mon Sep 17 00:00:00 2001 From: Marty Hernandez Avedon Date: Mon, 3 Apr 2023 11:03:15 -0400 Subject: [PATCH 20/22] #428 clarify that global default registry doesn't work when using namespaced registry on downstream RKE2 (#444) * #428 clarify that global default registry doesn't work when using namespaced registry on downstream RKE2 * build choked on angle brackets * added instructions for rke2 w certain namespaced private registries * Apply suggestions from code review Co-authored-by: Billy Tat --------- Co-authored-by: Billy Tat --- .../global-default-private-registry.md | 54 +++++++++++++------ 1 file changed, 37 insertions(+), 17 deletions(-) diff --git a/docs/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry.md b/docs/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry.md index 3130bbf15c0..7e8f4886777 100644 --- a/docs/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry.md +++ b/docs/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry.md @@ -2,36 +2,56 @@ title: Configuring a Global Default Private Registry --- -You might want to use a private container image registry to share your custom base images within your organization. With a private registry, you can keep a private, consistent, and centralized source of truth for the container images that are used in your clusters. +:::note +This page describes how to configure a global default private registry from the Rancher UI, after Rancher is already installed. -There are two main ways to set up private registries in Rancher: by setting up the global default registry through the **Settings** tab in the global view, and by setting up a private registry in the advanced options in the cluster-level settings. 
The global default registry is intended to be used for air-gapped setups, for registries that do not require credentials. The cluster-level private registry is intended to be used in all setups in which the private registry requires credentials. +For instructions on how to set up a private registry during Rancher installation, refer to the [air-gapped installation guide](../../../pages-for-subheaders/air-gapped-helm-cli-install.md). -This section is about configuring the global default private registry, and focuses on how to configure the registry from the Rancher UI after Rancher is installed. +::: -For instructions on setting up a private registry with command line options during the installation of Rancher, refer to the [air-gapped installation guide](../../../pages-for-subheaders/air-gapped-helm-cli-install.md). +A private registry is a private, consistent, and centralized source of truth for the container images in your clusters. You can use a private container image registry to share custom base images within your organization. -If your private registry requires credentials, it cannot be used as the default registry. There is no global way to set up a private registry with authorization for every Rancher-provisioned cluster. Therefore, if you want a Rancher-provisioned cluster to pull images from a private registry with credentials, you will have to [pass in the registry credentials through the advanced cluster options](#setting-a-private-registry-with-credentials-when-deploying-a-cluster) every time you create a new cluster. +There are two main ways to set up a private registry in Rancher: -## Setting a Private Registry with No Credentials as the Default Registry +* Set up a global default registry through the **Settings** tab in the global view. +* Set up a private registry in the advanced options under cluster-level settings. + +The global default registry is intended to be used in air-gapped setups, for registries that don't require credentials. 
The cluster-level private registry is intended to be used in setups where the private registry requires credentials.
+
+## Set a Private Registry with No Credentials as the Default Registry
 
 1. Log into Rancher and configure the default administrator password.
-1. Click **☰ > Global Settings**.
-1. Go to the setting called `system-default-registry` and choose **⋮ > Edit Setting**.
-1. Change the value to your registry (e.g. `registry.yourdomain.com:port`). Do not prefix the registry with `http://` or `https://`.
+1. Select **☰ > Global Settings**.
+1. Go to `system-default-registry` and choose **⋮ > Edit Setting**.
+1. Enter your registry's hostname and port (e.g. `registry.yourdomain.com:port`). Do not prefix the text with `http://` or `https://`.
 
-**Result:** Rancher will use your private registry to pull system images.
+**Result:** Rancher pulls system images from your private registry.
 
-## Setting a Private Registry with Credentials when Deploying a Cluster
+### Namespaced Private Registry with RKE2 Downstream Clusters
 
-You can follow these steps to configure a private registry when you create a cluster:
+Most private registries should work, by default, with RKE2 downstream clusters.
 
-1. Click **☰ > Cluster Management**.
+However, you'll need to take some additional steps if you're trying to set a namespaced private registry whose URL is formatted like this: `website/subdomain:portnumber`.
+
+1. Select **☰ > Cluster Management**.
+1. Find the RKE2 cluster in the list and click **⋮ > Edit Config**.
+1. From the **Cluster config** menu, select **Registries**.
+1. In the **Registries** pane, select the **Configure advanced containerd mirroring and registry authentication options** option.
+1. In the text fields under **Mirrors**, enter the **Registry Hostname** and **Mirror Endpoints**.
+1. Click **Save**.
+1. Repeat as necessary for each downstream RKE2 cluster.
 
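The mirror fields above map to containerd registry configuration that RKE2 renders on each node. As a rough sketch (the hostname, port, and endpoint below are illustrative placeholders, not values from this guide), the resulting `/etc/rancher/rke2/registries.yaml` looks something like:

```yaml
# Illustrative containerd mirror configuration for an RKE2 node.
# "registry.yourdomain.com:5000" stands in for your own registry hostname.
mirrors:
  "registry.yourdomain.com:5000":
    endpoint:
      - "https://registry.yourdomain.com:5000"
configs:
  "registry.yourdomain.com:5000":
    tls:
      # Set to true only for registries with self-signed certificates.
      insecure_skip_verify: false
```

Inspecting this file on a node is a quick way to confirm that the mirror settings entered in the UI were applied.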
+
+## Configure a Private Registry with Credentials when Creating a Cluster
+
+There is no global way to set up a private registry with authorization for every Rancher-provisioned cluster. Therefore, if you want a Rancher-provisioned cluster to pull images from a private registry that requires credentials, you'll have to pass the registry credentials through the advanced cluster options every time you create a new cluster.
+
+Since the private registry cannot be configured after the cluster is created, you'll need to perform these steps during initial cluster setup.
+
+1. Select **☰ > Cluster Management**.
 1. On the **Clusters** page, click **Create**.
 1. Choose a cluster type.
-1. In the **Cluster Configuration** go to the **Registries** tab and click **Pull images for Rancher from a private registry**.
+1. In the **Cluster Configuration**, go to the **Registries** tab and select **Pull images for Rancher from a private registry**.
 1. Enter the registry hostname and credentials.
 1. Click **Create**.
 
-**Result:** The new cluster will be able to pull images from the private registry.
-
-The private registry cannot be configured after the cluster is created.
+**Result:** The new cluster pulls images from the private registry.
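For cluster-as-code workflows, the same registry credentials can be sketched declaratively on the `provisioning.cattle.io/v1` Cluster resource. The cluster name, registry hostname, and secret name below are hypothetical, and the field layout should be verified against your Rancher version before use.

```yaml
# Sketch only: placeholder names throughout; verify fields for your Rancher version.
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: my-cluster              # hypothetical cluster name
  namespace: fleet-default
spec:
  rkeConfig:
    registries:
      configs:
        "registry.yourdomain.com:5000":             # hypothetical registry hostname
          authConfigSecretName: my-registry-creds   # secret holding username/password
```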
From 6461455e1fc40af1dee4677c37e012adf6e1ca8e Mon Sep 17 00:00:00 2001 From: Marty Hernandez Avedon Date: Mon, 3 Apr 2023 13:04:11 -0400 Subject: [PATCH 21/22] #77 Part 2: Addressing backlog issue about imported registered RKE2 clusters (#520) * #77 Addressing backlog issue about imported registered RKE2 clusters * Apply suggestions from code review Co-authored-by: Billy Tat --------- Co-authored-by: Billy Tat --- .../upgrade-and-roll-back-kubernetes.md | 3 ++- .../upgrade-and-roll-back-kubernetes.md | 3 ++- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/docs/getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md b/docs/getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md index 825c1af2d68..444e0363d28 100644 --- a/docs/getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md +++ b/docs/getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md @@ -32,7 +32,8 @@ The restore operation will work on a cluster that is not in a healthy or active :::note Prerequisites: -- The options below are available only for [Rancher-launched RKE Kubernetes clusters](../../pages-for-subheaders/launch-kubernetes-with-rancher.md) and [Registered K3s Kubernetes clusters.](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md#additional-features-for-registered-k3s-clusters) +- The options below are available for [Rancher-launched Kubernetes clusters](../../pages-for-subheaders/launch-kubernetes-with-rancher.md) and [Registered K3s Kubernetes clusters](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md#additional-features-for-registered-k3s-clusters). +- The following options also apply to imported RKE2 clusters that you have registered. If you import a cluster from an external cloud platform but don't register it, you won't be able to upgrade the Kubernetes version from Rancher. 
 - Before upgrading Kubernetes, [back up your cluster.](../../pages-for-subheaders/backup-restore-and-disaster-recovery.md)
 
 :::
 
diff --git a/versioned_docs/version-2.6/getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md b/versioned_docs/version-2.6/getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md
index 825c1af2d68..fc04b16b18d 100644
--- a/versioned_docs/version-2.6/getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md
+++ b/versioned_docs/version-2.6/getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md
@@ -32,7 +32,8 @@ The restore operation will work on a cluster that is not in a healthy or active
 :::note Prerequisites:
 
-- The options below are available only for [Rancher-launched RKE Kubernetes clusters](../../pages-for-subheaders/launch-kubernetes-with-rancher.md) and [Registered K3s Kubernetes clusters.](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md#additional-features-for-registered-k3s-clusters)
+- The options below are available for [Rancher-launched Kubernetes clusters](../../pages-for-subheaders/launch-kubernetes-with-rancher.md) and [Registered K3s Kubernetes clusters](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md#additional-features-for-registered-k3s-clusters).
+- The following options also apply to imported RKE2 clusters that you have registered. If you import a cluster from an external cloud platform but don't register it, you won't be able to upgrade the Kubernetes version from Rancher.
- Before upgrading Kubernetes, [back up your cluster.](../../pages-for-subheaders/backup-restore-and-disaster-recovery.md) ::: From a5e3b54e32b0130272212ebf67357d731e123155 Mon Sep 17 00:00:00 2001 From: Marty Hernandez Avedon Date: Mon, 3 Apr 2023 14:47:47 -0400 Subject: [PATCH 22/22] #77 Addressing backlog issue about registered AKS clusters (#518) * #77 addressing backlog issue about registered AKS clusters * moved the acronym expansions forward and made some style edits * applied suggestions from code review --- .../register-existing-clusters.md | 20 +++++++------------ .../register-existing-clusters.md | 16 ++++++--------- 2 files changed, 13 insertions(+), 23 deletions(-) diff --git a/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md b/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md index 38a2a0c2802..cfe6480695b 100644 --- a/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md +++ b/docs/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md @@ -17,9 +17,7 @@ For more information on RKE node roles, see the [best practices.](../../../pages ### Permissions -If your existing Kubernetes cluster already has a `cluster-admin` role defined, you must have this `cluster-admin` privilege to register the cluster in Rancher. - -In order to apply the privilege, you need to run: +To register a cluster in Rancher, you must have `cluster-admin` privileges within that cluster. If you don't, grant these privileges to your user by running: ```plain kubectl create clusterrolebinding cluster-admin-binding \ @@ -27,13 +25,11 @@ kubectl create clusterrolebinding cluster-admin-binding \ --user [USER_ACCOUNT] ``` -before running the `kubectl` command to register the cluster. 
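Before creating the binding above, it can help to check whether your account already has sufficient access. This verification step is a suggestion, not part of the documented procedure; `[USER_ACCOUNT]` is the same placeholder used above.

```plain
# Prints "yes" if your user can perform any verb on any resource cluster-wide
kubectl auth can-i '*' '*' --all-namespaces

# If the answer is "no", create the binding
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user [USER_ACCOUNT]
```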
-
-By default, GKE users are not given this privilege, so you will need to run the command before registering GKE clusters. To learn more about role-based access control for GKE, please click [here](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control).
+Since, by default, Google Kubernetes Engine (GKE) doesn't grant the `cluster-admin` role, you must run this command on GKE clusters before you can register them. To learn more about role-based access control for GKE, please see [the official Google documentation](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control).
 
 ### EKS, AKS and GKE Clusters
 
-EKS, AKS and GKE clusters must have at least one managed node group to be imported into Rancher or provisioned from Rancher successfully.
+To be successfully imported into or provisioned from Rancher, Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE) clusters must have at least one managed node group.
 
 ## Registering a Cluster
 
@@ -126,15 +122,13 @@ When an RKE2 or K3s cluster is registered in Rancher, Rancher will recognize it.
 - The ability to configure the maximum number of nodes that will be upgraded concurrently
 - The ability to see a read-only version of the cluster's configuration arguments and environment variables used to launch each node in the cluster
 
-### Additional Features for Registered EKS, AKS and GKE Clusters
+### Additional Features for Registered EKS, AKS, and GKE Clusters
 
-Registering an Amazon EKS, Azure AKS or GKE cluster allows Rancher to treat it as though it were created in Rancher.
+Rancher handles registered EKS, AKS, or GKE clusters similarly to clusters created in Rancher. However, Rancher doesn't destroy registered clusters when you delete them through the Rancher UI.
 
-Amazon EKS, Azure AKS and GKE clusters can now be registered in Rancher. For the most part, these registered clusters are treated the same way as clusters created in the Rancher UI, except for deletion.
+When you create an EKS, AKS, or GKE cluster in Rancher, then delete it, Rancher destroys the cluster. When you delete a registered cluster through Rancher, the Rancher server _disconnects_ from the cluster. The cluster remains live, although it's no longer in Rancher. You can still access the deregistered cluster in the same way you did before you registered it.
 
-When you delete an EKS, AKS or GKE cluster that was created in Rancher, the cluster is destroyed. When you delete a cluster that was registered in Rancher, it is disconnected from the Rancher server, but it still exists, and you can still access it in the same way you did before it was registered in Rancher.
-
-The capabilities for registered clusters are listed in the table on [this page.](../../../pages-for-subheaders/kubernetes-clusters-in-rancher-setup.md)
+See [Cluster Management Capabilities by Cluster Type](../../../pages-for-subheaders/kubernetes-clusters-in-rancher-setup.md) for more information about what features are available for managing registered clusters.
 ## Configuring RKE2 and K3s Cluster Upgrades
 
diff --git a/versioned_docs/version-2.6/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md b/versioned_docs/version-2.6/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md
index fda6b8ed61d..51b989cb202 100644
--- a/versioned_docs/version-2.6/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md
+++ b/versioned_docs/version-2.6/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md
@@ -17,9 +17,7 @@ For more information on RKE node roles, see the [best practices.](../../../pages
 ### Permissions
 
-If your existing Kubernetes cluster already has a `cluster-admin` role defined, you must have this `cluster-admin` privilege to register the cluster in Rancher.
-
-In order to apply the privilege, you need to run:
+To register a cluster in Rancher, you must have `cluster-admin` privileges within that cluster. If you don't, grant these privileges to your user by running:
 
 ```plain
 kubectl create clusterrolebinding cluster-admin-binding \
@@ -29,7 +27,7 @@ kubectl create clusterrolebinding cluster-admin-binding \
 
 before running the `kubectl` command to register the cluster.
 
-By default, GKE users are not given this privilege, so you will need to run the command before registering GKE clusters. To learn more about role-based access control for GKE, please click [here](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control).
+Since, by default, Google Kubernetes Engine (GKE) doesn't grant the `cluster-admin` role, you must run this command on GKE clusters before you can register them. To learn more about role-based access control for GKE, please see [the official Google documentation](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control).
If you are registering a K3s cluster, make sure the `cluster.yml` is readable. It is protected by default. For details, refer to [Configuring a K3s cluster to enable importation to Rancher.](#configuring-a-k3s-cluster-to-enable-registration-in-rancher) @@ -142,15 +140,13 @@ When a K3s cluster is registered in Rancher, Rancher will recognize it as K3s. T - The ability to configure the maximum number of nodes that will be upgraded concurrently - The ability to see a read-only version of the K3s cluster's configuration arguments and environment variables used to launch each node in the cluster -### Additional Features for Registered EKS and GKE Clusters +### Additional Features for Registered EKS, AKS, and GKE Clusters -Registering an Amazon EKS cluster or GKE cluster allows Rancher to treat it as though it were created in Rancher. +Rancher handles registered EKS, AKS, or GKE clusters similarly to clusters created in Rancher. However, Rancher doesn't destroy registered clusters when you delete them through the Rancher UI. -Amazon EKS clusters and GKE clusters can now be registered in Rancher. For the most part, these registered clusters are treated the same way as clusters created in the Rancher UI, except for deletion. +When you create an EKS, AKS, or GKE cluster in Rancher, then delete it, Rancher destroys the cluster. When you delete a registered cluster through Rancher, the Rancher server _disconnects_ from the cluster. The cluster remains live, although it's no longer in Rancher. You can still access the deregistered cluster in the same way you did before you registered it. -When you delete an EKS cluster or GKE cluster that was created in Rancher, the cluster is destroyed. When you delete a cluster that was registered in Rancher, it is disconnected from the Rancher server, but it still exists and you can still access it in the same way you did before it was registered in Rancher. 
- -The capabilities for registered clusters are listed in the table on [this page.](../../../pages-for-subheaders/kubernetes-clusters-in-rancher-setup.md) +See [Cluster Management Capabilities by Cluster Type](../../../pages-for-subheaders/kubernetes-clusters-in-rancher-setup.md) for more information about what features are available for managing registered clusters. ## Configuring K3s Cluster Upgrades