From e5b68c910320d9afe7b7a1b0f4f2018c79278a6c Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Fri, 18 Jul 2025 13:19:04 -0700 Subject: [PATCH 01/57] rm RKE1 references: contribute-to-rancher --- docs/contribute-to-rancher.md | 22 ------------------- .../current/contribute-to-rancher.md | 22 ------------------- .../version-2.12/contribute-to-rancher.md | 22 ------------------- .../version-2.12/contribute-to-rancher.md | 22 ------------------- 4 files changed, 88 deletions(-) diff --git a/docs/contribute-to-rancher.md b/docs/contribute-to-rancher.md index 9b33f12368a..ee1fe70575f 100644 --- a/docs/contribute-to-rancher.md +++ b/docs/contribute-to-rancher.md @@ -39,7 +39,6 @@ User Interface | https://github.com/rancher/dashboard/ | This repository is the (Rancher) Docker Machine | https://github.com/rancher/machine | This repository is the source of the Docker Machine binary used when using Node Drivers. This is a fork of the `docker/machine` repository. machine-package | https://github.com/rancher/machine-package | This repository is used to build the Rancher Docker Machine binary. kontainer-engine | https://github.com/rancher/kontainer-engine | This repository is the source of kontainer-engine, the tool to provision hosted Kubernetes clusters. -RKE repository | https://github.com/rancher/rke | This repository is the source of Rancher Kubernetes Engine, the tool to provision Kubernetes clusters on any machine. CLI | https://github.com/rancher/cli | This repository is the source code for the Rancher CLI used in Rancher 2.x. (Rancher) Helm repository | https://github.com/rancher/helm | This repository is the source of the packaged Helm binary. This is a fork of the `helm/helm` repository. loglevel repository | https://github.com/rancher/loglevel | This repository is the source of the loglevel binary, used to dynamically change log levels. @@ -109,27 +108,6 @@ Please remove any sensitive data as it will be publicly viewable. 
-l app=rancher \ --timestamps=true ``` - - Docker install using `docker` on each of the nodes in the RKE cluster - - ``` - docker logs \ - --timestamps \ - $(docker ps | grep -E "rancher/rancher@|rancher_rancher" | awk '{ print $1 }') - ``` - - Kubernetes Install with RKE Add-On - - :::note - - Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_cluster.yml` if the Rancher server is installed on a Kubernetes cluster) or are using the embedded kubectl via the UI. - - ::: - - ``` - kubectl -n cattle-system \ - logs \ - --timestamps=true \ - -f $(kubectl --kubeconfig $KUBECONFIG get pods -n cattle-system -o json | jq -r '.items[] | select(.spec.containers[].name="cattle-server") | .metadata.name') - ``` - System logging (these might not all exist, depending on operating system) - `/var/log/messages` - `/var/log/syslog` diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/contribute-to-rancher.md b/i18n/zh/docusaurus-plugin-content-docs/current/contribute-to-rancher.md index baff7b735b9..ea433c1f2be 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/contribute-to-rancher.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/contribute-to-rancher.md @@ -35,7 +35,6 @@ title: 参与 Rancher 社区贡献 | (Rancher) Docker Machine | https://github.com/rancher/machine | 使用主机驱动时使用的 Docker Machine 二进制文件的源码仓库。这是 `docker/machine` 仓库的一个 fork。 | | machine-package | https://github.com/rancher/machine-package | 用于构建 Rancher Docker Machine 二进制文件。 | | kontainer-engine | https://github.com/rancher/kontainer-engine | kontainer-engine 的源码仓库,它是配置托管 Kubernetes 集群的工具。 | -| RKE repository | https://github.com/rancher/rke | Rancher Kubernetes Engine 的源码仓库,该工具可在任何主机上配置 Kubernetes 集群。 | | CLI | https://github.com/rancher/cli | Rancher 2.x 中使用的 Rancher CLI 的源码仓库。 | | (Rancher) Helm repository | https://github.com/rancher/helm | 打包的 Helm 二进制文件的源码仓库。这是 `helm/helm` 仓库的一个 fork。 | | Telemetry repository | https://github.com/rancher/telemetry | 
Telemetry 二进制文件的源码仓库。 | @@ -106,27 +105,6 @@ title: 参与 Rancher 社区贡献 -l app=rancher \ --timestamps=true ``` - - 在 RKE 集群的每个节点上使用 `docker` 的 Docker 安装 - - ``` - docker logs \ - --timestamps \ - $(docker ps | grep -E "rancher/rancher@|rancher_rancher" | awk '{ print $1 }') - ``` - - 使用 RKE 附加组件的 Kubernetes 安装 - - :::note - - 确保你配置了正确的 kubeconfig(例如,如果 Rancher Server 安装在 Kubernetes 集群上,则 `export KUBECONFIG=$PWD/kube_config_cluster.yml`)或通过 UI 使用了嵌入式 kubectl。 - - ::: - - ``` - kubectl -n cattle-system \ - logs \ - --timestamps=true \ - -f $(kubectl --kubeconfig $KUBECONFIG get pods -n cattle-system -o json | jq -r '.items[] | select(.spec.containers[].name="cattle-server") | .metadata.name') - ``` - 系统日志记录(可能不存在,取决于操作系统) - `/var/log/messages` - `/var/log/syslog` diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/contribute-to-rancher.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/contribute-to-rancher.md index baff7b735b9..ea433c1f2be 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/contribute-to-rancher.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/contribute-to-rancher.md @@ -35,7 +35,6 @@ title: 参与 Rancher 社区贡献 | (Rancher) Docker Machine | https://github.com/rancher/machine | 使用主机驱动时使用的 Docker Machine 二进制文件的源码仓库。这是 `docker/machine` 仓库的一个 fork。 | | machine-package | https://github.com/rancher/machine-package | 用于构建 Rancher Docker Machine 二进制文件。 | | kontainer-engine | https://github.com/rancher/kontainer-engine | kontainer-engine 的源码仓库,它是配置托管 Kubernetes 集群的工具。 | -| RKE repository | https://github.com/rancher/rke | Rancher Kubernetes Engine 的源码仓库,该工具可在任何主机上配置 Kubernetes 集群。 | | CLI | https://github.com/rancher/cli | Rancher 2.x 中使用的 Rancher CLI 的源码仓库。 | | (Rancher) Helm repository | https://github.com/rancher/helm | 打包的 Helm 二进制文件的源码仓库。这是 `helm/helm` 仓库的一个 fork。 | | Telemetry repository | https://github.com/rancher/telemetry | Telemetry 二进制文件的源码仓库。 | @@ -106,27 +105,6 @@ title: 参与 Rancher 社区贡献 -l app=rancher \ 
--timestamps=true ``` - - 在 RKE 集群的每个节点上使用 `docker` 的 Docker 安装 - - ``` - docker logs \ - --timestamps \ - $(docker ps | grep -E "rancher/rancher@|rancher_rancher" | awk '{ print $1 }') - ``` - - 使用 RKE 附加组件的 Kubernetes 安装 - - :::note - - 确保你配置了正确的 kubeconfig(例如,如果 Rancher Server 安装在 Kubernetes 集群上,则 `export KUBECONFIG=$PWD/kube_config_cluster.yml`)或通过 UI 使用了嵌入式 kubectl。 - - ::: - - ``` - kubectl -n cattle-system \ - logs \ - --timestamps=true \ - -f $(kubectl --kubeconfig $KUBECONFIG get pods -n cattle-system -o json | jq -r '.items[] | select(.spec.containers[].name="cattle-server") | .metadata.name') - ``` - 系统日志记录(可能不存在,取决于操作系统) - `/var/log/messages` - `/var/log/syslog` diff --git a/versioned_docs/version-2.12/contribute-to-rancher.md b/versioned_docs/version-2.12/contribute-to-rancher.md index 9b33f12368a..ee1fe70575f 100644 --- a/versioned_docs/version-2.12/contribute-to-rancher.md +++ b/versioned_docs/version-2.12/contribute-to-rancher.md @@ -39,7 +39,6 @@ User Interface | https://github.com/rancher/dashboard/ | This repository is the (Rancher) Docker Machine | https://github.com/rancher/machine | This repository is the source of the Docker Machine binary used when using Node Drivers. This is a fork of the `docker/machine` repository. machine-package | https://github.com/rancher/machine-package | This repository is used to build the Rancher Docker Machine binary. kontainer-engine | https://github.com/rancher/kontainer-engine | This repository is the source of kontainer-engine, the tool to provision hosted Kubernetes clusters. -RKE repository | https://github.com/rancher/rke | This repository is the source of Rancher Kubernetes Engine, the tool to provision Kubernetes clusters on any machine. CLI | https://github.com/rancher/cli | This repository is the source code for the Rancher CLI used in Rancher 2.x. (Rancher) Helm repository | https://github.com/rancher/helm | This repository is the source of the packaged Helm binary. 
This is a fork of the `helm/helm` repository. loglevel repository | https://github.com/rancher/loglevel | This repository is the source of the loglevel binary, used to dynamically change log levels. @@ -109,27 +108,6 @@ Please remove any sensitive data as it will be publicly viewable. -l app=rancher \ --timestamps=true ``` - - Docker install using `docker` on each of the nodes in the RKE cluster - - ``` - docker logs \ - --timestamps \ - $(docker ps | grep -E "rancher/rancher@|rancher_rancher" | awk '{ print $1 }') - ``` - - Kubernetes Install with RKE Add-On - - :::note - - Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_cluster.yml` if the Rancher server is installed on a Kubernetes cluster) or are using the embedded kubectl via the UI. - - ::: - - ``` - kubectl -n cattle-system \ - logs \ - --timestamps=true \ - -f $(kubectl --kubeconfig $KUBECONFIG get pods -n cattle-system -o json | jq -r '.items[] | select(.spec.containers[].name="cattle-server") | .metadata.name') - ``` - System logging (these might not all exist, depending on operating system) - `/var/log/messages` - `/var/log/syslog` From c35593dd279daa4efe245c673851dcd807567e4b Mon Sep 17 00:00:00 2001 From: Harrison Affel Date: Tue, 22 Jul 2025 10:44:41 -0400 Subject: [PATCH 02/57] add documentation for new gce node driver Signed-off-by: Harrison Affel --- .../about-provisioning-drivers.md | 1 + .../create-a-google-compute-engine-cluster.md | 107 ++++++++++++++++++ .../machine-configuration/google-gce.md | 87 ++++++++++++++ sidebars.js | 3 + .../about-provisioning-drivers.md | 1 + .../create-a-google-compute-engine-cluster.md | 107 ++++++++++++++++++ .../machine-configuration/google-gce.md | 86 ++++++++++++++ versioned_sidebars/version-2.12-sidebars.json | 4 +- 8 files changed, 395 insertions(+), 1 deletion(-) create mode 100644 
docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-google-compute-engine-cluster.md create mode 100644 docs/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/google-gce.md create mode 100644 versioned_docs/version-2.12/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-google-compute-engine-cluster.md create mode 100644 versioned_docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/google-gce.md diff --git a/docs/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/about-provisioning-drivers.md b/docs/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/about-provisioning-drivers.md index 2295d1cf196..63b78e20f4c 100644 --- a/docs/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/about-provisioning-drivers.md +++ b/docs/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/about-provisioning-drivers.md @@ -49,3 +49,4 @@ Rancher supports several major cloud providers, but by default, these node drive There are several other node drivers that are disabled by default, but are packaged in Rancher: * [Harvester](../../../../integrations-in-rancher/harvester/overview.md#harvester-node-driver/), available as of Rancher v2.6.1 +* [Google GCE](../../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-google-compute-engine-cluster.md), available as of Rancher v2.12.0 diff --git a/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-google-compute-engine-cluster.md 
b/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-google-compute-engine-cluster.md new file mode 100644 index 00000000000..b8f081601de --- /dev/null +++ b/docs/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-google-compute-engine-cluster.md @@ -0,0 +1,107 @@ +--- +title: Creating a Google Compute Engine cluster +--- + + + + + + +In this section, you'll learn how to use Rancher to provision an [RKE2](https://docs.rke2.io/) Kubernetes cluster on the Google Cloud Platform (GCP) using Google Compute Engine (GCE). + + +First, you will enable the GCE node driver in the Rancher UI. Then, you will follow the steps to create a GCP service account with the necessary permissions and generate a JSON key file. This key file will be used to create a cloud credential in Rancher. + + +Finally, you will create a GCE cluster in Rancher, and when configuring the cluster, you will define machine pools for it. Each machine pool will have a Kubernetes role of etcd, controlplane, or worker. Rancher will install RKE2 onto the new nodes, and it will set up each node with the Kubernetes role defined by the machine pool. + + +1. [Enable the GCE node driver](#1-enable-the-gce-node-driver) +1. [Create your cloud credential](#2-create-a-cloud-credential) +1. [Create a GCE cluster with your cloud credential](#3-create-a-cluster-using-the-cloud-credential) +1. [GCE Best Practices](#gce-best-practices) + +### Prerequisites + +1. A valid Google Cloud Platform account and project. +1. A GCP Service Account JSON key file. The service account associated with this key must have the following IAM roles: + 1. **Compute Admin** + 1. **Service Account User** + 1. **Viewer** +1. A VPC Network to provision VMs within. + +Refer to the [GCP documentation](https://cloud.google.com/iam/docs/service-account-overview) on creating and managing service account keys for more details.
+ + +### 1. Enable the GCE node driver + +The GCE node driver is not enabled by default in Rancher. You must enable it before you can provision GCE clusters or work with GCE-specific CRDs. + +1. Click **☰ > Cluster Management**. +1. On the left-hand side, click **Drivers**. +1. Open the **Node Drivers** tab. +1. Find the **Google GCE** driver and select **⋮ > Activate**. + + +### 2. Create a cloud credential + +1. Click **☰ > Cluster Management**. +1. Click **Cloud Credentials**. +1. Click **Create**. +1. Click **Google**. +1. Enter your GCP Service Account JSON key file. +1. Click **Create**. + +**Result:** You have created the cloud credentials that will be used to provision nodes in your cluster. You can reuse these credentials in other clusters. Depending on the permissions granted to the service account, this credential may also be used for GKE clusters. + + +### 3. Create a cluster using the cloud credential + +1. Click **☰ > Cluster Management**. +1. On the **Clusters** page, click **Create**. +1. Click **Google GCE**. +1. Select a **Cloud Credential** and provide the GCP project to create the VM in. +1. Enter a **Cluster Name**. +1. Create a machine pool for each Kubernetes role. Refer to the [best practices](use-new-nodes-in-an-infra-provider.md#node-roles) for recommendations on role assignments and counts. + 1. For each machine pool, define the machine configuration. Refer to the [Google GCE machine configuration reference](../../../../reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/google-gce.md) for information on configuration options. +1. Use the **Cluster Configuration** to choose the version of Kubernetes that will be installed, which network provider will be used, and whether to enable project network isolation.
For help configuring the cluster, refer to the [RKE2 cluster configuration reference](../../../../reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md). +1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user. +1. Click **Create**. + + +**Result:** + +Your cluster is created and assigned a state of **Provisioning**. Rancher is standing up your cluster. + +You can access your cluster after its state is updated to **Active**. + +**Active** clusters are assigned two Projects: + +- `Default`, containing the `default` namespace +- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces + +### GCE Best Practices + +#### External Firewall Rules, Open Ports, and ACE + +If the cluster being provisioned will utilize the [Authorized Cluster Endpoint (ACE) feature](../../../new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster), controlplane nodes must expose port `6443`. This port is not exposed in the default machine pool configuration to prevent it from being exposed across all cluster nodes, and to reduce the number of firewall rules created by Rancher. + +In order for ACE to work as expected, you must specify this port in the Rancher UI when configuring the controlplane machine pool by enabling the `Expose external ports` checkbox, under the `Show Advanced` section of the machine pool configuration UI. Alternatively, you may manually create a custom firewall rule in GCP and provide the related network tag in the controlplane machine-pool configuration. + +#### Internal Firewall Rules + +Rancher will automatically create a firewall rule and network tag to facilitate communication between cluster nodes internally within the specified VPC network.
This rule will contain the minimum number of ports required to create an RKE2/K3s cluster. + +If you need to extend the number of ports exposed internally between cluster nodes, a new firewall rule should be manually created, and the associated network tag assigned to the relevant machine pools. If desired, the automatic creation of the internal firewall rule can be disabled for each given machine pool when creating or updating the cluster. + +#### Cross Network Deployments + +While it is possible to deploy different machine pools into different VPC networks, the internal firewall rule created by Rancher does not support this configuration by default. To create machine pools in different networks, additional firewall rules to facilitate communication between nodes in different networks must be manually created. + + +## Optional Next Steps + +After creating your cluster, you can access it through the Rancher UI. As a best practice, we recommend setting up these alternate ways of accessing your cluster: + +- **Access your cluster with the kubectl CLI:** Follow [these steps](../../../new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#accessing-clusters-with-kubectl-from-your-workstation) to access clusters with kubectl on your workstation. In this case, you will be authenticated through the Rancher server’s authentication proxy, then Rancher will connect you to the downstream cluster. This method lets you manage the cluster without the Rancher UI. +- **Access your cluster with the kubectl CLI, using the authorized cluster endpoint:** Follow [these steps](../../../new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster) to access your cluster with kubectl directly, without authenticating through Rancher. We recommend setting up this alternative method to access your cluster so that in case you can’t connect to Rancher, you can still access the cluster. 
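The prerequisites above call for a GCP service account JSON key file. Before pasting a key into Rancher's cloud-credential form, a quick local sanity check can catch an accidentally exported user credential or a truncated file. The sketch below checks for the fields found in a standard GCP service-account key; Rancher itself simply consumes the raw JSON, so this is a convenience, not a Rancher requirement:

```python
import json

# Fields present in a standard GCP service-account key file (not a
# Rancher-specific requirement; Rancher takes the raw JSON as-is).
REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email"}


def check_service_account_key(raw: str) -> list[str]:
    """Return a list of problems found in a service-account key JSON string."""
    try:
        key = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - key.keys())]
    if key.get("type") != "service_account":
        # "authorized_user" keys come from `gcloud auth` user credentials,
        # which are not what the cloud credential form expects.
        problems.append(f"'type' should be 'service_account' (got {key.get('type')!r})")
    return problems


# A key missing most fields is flagged field by field:
print(check_service_account_key('{"type": "service_account"}'))
```

An empty result means the file at least has the expected shape; it does not verify that the service account actually holds the Compute Admin, Service Account User, and Viewer roles listed in the prerequisites.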
diff --git a/docs/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/google-gce.md b/docs/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/google-gce.md new file mode 100644 index 00000000000..6c320f3b3d4 --- /dev/null +++ b/docs/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/google-gce.md @@ -0,0 +1,87 @@ +--- +title: GCE Machine Configuration +--- + + + + + + +For more information about Google Cloud Platform (GCP) and the Google Compute Engine (GCE), refer to the official [GCP documentation](https://cloud.google.com/docs). + +### Zone + +The GCP Region and Zone that the VM will be deployed to. For example, `us-east1-b`. + +### Machine Image Project + +The image project that the desired image families belong to. + +### Machine Image Family + +The image family that the desired machine operating system belongs to. + +### Machine Image + +The operating system that will be installed onto the VM. + +### Disk Type + +The type of the disk attached to the VM. The available types may differ between regions. + +### Disk Size + +The size of the disk attached to the VM, in Gigabytes. + +### Machine Type + +The type of VM that will be deployed. Machine types determine the number of resources (vCPU, RAM, etc.) allocated for each node. + +### Network + +The VPC network that the VM will be created in. This value cannot be changed once the machine pool has been provisioned. + +### Subnet + +The VPC subnetwork that the VM will be created in. This value cannot be changed once the machine pool has been provisioned. + +### Username + +A custom username set as the default user of the GCE VM. + +### External Address + +The desired external IP address for the GCE VM. + +### Scopes + +A list of OAuth2 scopes which allow the VM to access other GCP APIs.
+ +### Allow Internal Communication + +By default, a VPC firewall rule is automatically created to expose a fixed set of ports within the VPC to facilitate communication between cluster nodes. This behavior can be disabled on a per-machine-pool basis by clicking the `Show Advanced` option and disabling the `Allow Internal Communication` checkbox. + +### Expose External ports + +A list of ports to be opened _externally_ to the wider internet. Open ports are defined at the machine pool level. Enabling this option will result in the automatic creation of a VPC firewall rule. This rule will be automatically deleted when the cluster or machine pool is deleted. + +### Network Tags + +A list of _network tags_ that can be used to associate preexisting firewall rules with all VMs within a machine pool. + +### Labels + +A comma-separated list of custom labels to be attached to all VMs within a given machine pool. Unlike tags, labels do not influence networking behavior and only serve to organize cloud resources. + +## Advanced Options + +When creating clusters via the Rancher UI, some options are automatically configured for you. However, when creating machine config objects manually, you must ensure you properly configure the fields below. + +### external-firewall-rule-prefix + +A prefix that will be used when creating the firewall rule to expose ports publicly. Ideally, this should be a concatenation of the machine pool name and the cluster name. This field must be set if the machine pool is configured to expose ports publicly; otherwise it can be omitted. + +### internal-firewall-rule-prefix + +A prefix that will be used when creating the internal firewall rule which allows for communication between nodes within the cluster. If this field is omitted, no internal firewall rule will be created.
+ diff --git a/sidebars.js b/sidebars.js index 6c39c1de6b4..b92debb7783 100644 --- a/sidebars.js +++ b/sidebars.js @@ -534,6 +534,8 @@ const sidebars = { "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-digitalocean-cluster", "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-azure-cluster", + + "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-google-compute-engine-cluster", { type: 'category', label: 'Creating a VMware vSphere Cluster', @@ -946,6 +948,7 @@ const sidebars = { "reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/amazon-ec2", "reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/digitalocean", "reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/azure", + "reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/google-gce" ] } ] diff --git a/versioned_docs/version-2.12/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/about-provisioning-drivers.md b/versioned_docs/version-2.12/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/about-provisioning-drivers.md index 2295d1cf196..63b78e20f4c 100644 --- a/versioned_docs/version-2.12/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/about-provisioning-drivers.md +++ b/versioned_docs/version-2.12/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/about-provisioning-drivers.md @@ -49,3 +49,4 @@ Rancher supports several major cloud providers, but by default, these node drive There are several other node drivers that are disabled by default, but are 
packaged in Rancher: * [Harvester](../../../../integrations-in-rancher/harvester/overview.md#harvester-node-driver/), available as of Rancher v2.6.1 +* [Google GCE](../../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-google-compute-engine-cluster.md), available as of Rancher v2.12.0 diff --git a/versioned_docs/version-2.12/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-google-compute-engine-cluster.md b/versioned_docs/version-2.12/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-google-compute-engine-cluster.md new file mode 100644 index 00000000000..b8f081601de --- /dev/null +++ b/versioned_docs/version-2.12/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-google-compute-engine-cluster.md @@ -0,0 +1,107 @@ +--- +title: Creating a Google Compute Engine cluster +--- + + + + + + +In this section, you'll learn how to use Rancher to provision an [RKE2](https://docs.rke2.io/) Kubernetes cluster on the Google Cloud Platform (GCP) using Google Compute Engine (GCE). + + +First, you will enable the GCE node driver in the Rancher UI. Then, you will follow the steps to create a GCP service account with the necessary permissions and generate a JSON key file. This key file will be used to create a cloud credential in Rancher. + + +Finally, you will create a GCE cluster in Rancher, and when configuring the cluster, you will define machine pools for it. Each machine pool will have a Kubernetes role of etcd, controlplane, or worker. Rancher will install RKE2 onto the new nodes, and it will set up each node with the Kubernetes role defined by the machine pool. + + +1. [Enable the GCE node driver](#1-enable-the-gce-node-driver) +1. [Create your cloud credential](#2-create-a-cloud-credential) +1.
[Create a GCE cluster with your cloud credential](#3-create-a-cluster-using-the-cloud-credential) +1. [GCE Best Practices](#gce-best-practices) + +### Prerequisites + +1. A valid Google Cloud Platform account and project. +1. A GCP Service Account JSON key file. The service account associated with this key must have the following IAM roles: + 1. **Compute Admin** + 1. **Service Account User** + 1. **Viewer** +1. A VPC Network to provision VMs within. + +Refer to the [GCP documentation](https://cloud.google.com/iam/docs/service-account-overview) on creating and managing service account keys for more details. + + +### 1. Enable the GCE node driver + +The GCE node driver is not enabled by default in Rancher. You must enable it before you can provision GCE clusters or work with GCE-specific CRDs. + +1. Click **☰ > Cluster Management**. +1. On the left-hand side, click **Drivers**. +1. Open the **Node Drivers** tab. +1. Find the **Google GCE** driver and select **⋮ > Activate**. + + +### 2. Create a cloud credential + +1. Click **☰ > Cluster Management**. +1. Click **Cloud Credentials**. +1. Click **Create**. +1. Click **Google**. +1. Enter your GCP Service Account JSON key file. +1. Click **Create**. + +**Result:** You have created the cloud credentials that will be used to provision nodes in your cluster. You can reuse these credentials in other clusters. Depending on the permissions granted to the service account, this credential may also be used for GKE clusters. + + +### 3. Create a cluster using the cloud credential + +1. Click **☰ > Cluster Management**. +1. On the **Clusters** page, click **Create**. +1. Click **Google GCE**. +1. Select a **Cloud Credential** and provide the GCP project to create the VM in. +1. Enter a **Cluster Name**. +1. Create a machine pool for each Kubernetes role. Refer to the [best practices](use-new-nodes-in-an-infra-provider.md#node-roles) for recommendations on role assignments and counts. + 1.
For each machine pool, define the machine configuration. Refer to the [Google GCE machine configuration reference](../../../../reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/google-gce.md) for information on configuration options. +1. Use the **Cluster Configuration** to choose the version of Kubernetes that will be installed, which network provider will be used, and whether to enable project network isolation. For help configuring the cluster, refer to the [RKE2 cluster configuration reference](../../../../reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md). +1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user. +1. Click **Create**. + + +**Result:** + +Your cluster is created and assigned a state of **Provisioning**. Rancher is standing up your cluster. + +You can access your cluster after its state is updated to **Active**. + +**Active** clusters are assigned two Projects: + +- `Default`, containing the `default` namespace +- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces + +### GCE Best Practices + +#### External Firewall Rules, Open Ports, and ACE + +If the cluster being provisioned will utilize the [Authorized Cluster Endpoint (ACE) feature](../../../new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster), controlplane nodes must expose port `6443`. This port is not exposed in the default machine pool configuration to prevent it from being exposed across all cluster nodes, and to reduce the number of firewall rules created by Rancher.
+ +In order for ACE to work as expected, you must specify this port in the Rancher UI when configuring the controlplane machine pool by enabling the `Expose external ports` checkbox, under the `Show Advanced` section of the machine pool configuration UI. Alternatively, you may manually create a custom firewall rule in GCP and provide the related network tag in the controlplane machine-pool configuration. + +#### Internal Firewall Rules + +Rancher will automatically create a firewall rule and network tag to facilitate communication between cluster nodes internally within the specified VPC network. This rule will contain the minimum number of ports required to create an RKE2/K3s cluster. + +If you need to extend the number of ports exposed internally between cluster nodes, a new firewall rule should be manually created, and the associated network tag assigned to the relevant machine pools. If desired, the automatic creation of the internal firewall rule can be disabled for each given machine pool when creating or updating the cluster. + +#### Cross Network Deployments + +While it is possible to deploy different machine pools into different VPC networks, the internal firewall rule created by Rancher does not support this configuration by default. To create machine pools in different networks, additional firewall rules to facilitate communication between nodes in different networks must be manually created. + + +## Optional Next Steps + +After creating your cluster, you can access it through the Rancher UI. As a best practice, we recommend setting up these alternate ways of accessing your cluster: + +- **Access your cluster with the kubectl CLI:** Follow [these steps](../../../new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#accessing-clusters-with-kubectl-from-your-workstation) to access clusters with kubectl on your workstation. 
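As the ACE best practices above note, controlplane nodes must expose port `6443` for the authorized cluster endpoint to work. After enabling the `Expose external ports` checkbox (or adding a custom firewall rule), a plain TCP probe against a controlplane node's external address is a quick way to confirm the rule took effect. This is a generic reachability check, not a Rancher-specific tool, and the address below is a placeholder:

```python
import socket


def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unresolvable hosts.
        return False


# Hypothetical controlplane external address -- substitute your node's IP:
# port_reachable("203.0.113.10", 6443)
```

A `True` result only proves the TCP port is open; verifying that the kubeconfig's authorized-cluster-endpoint context actually authenticates still requires `kubectl`.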
In this case, you will be authenticated through the Rancher server’s authentication proxy, then Rancher will connect you to the downstream cluster. This method lets you manage the cluster without the Rancher UI. +- **Access your cluster with the kubectl CLI, using the authorized cluster endpoint:** Follow [these steps](../../../new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster) to access your cluster with kubectl directly, without authenticating through Rancher. We recommend setting up this alternative method to access your cluster so that in case you can’t connect to Rancher, you can still access the cluster. diff --git a/versioned_docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/google-gce.md b/versioned_docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/google-gce.md new file mode 100644 index 00000000000..0960df5f441 --- /dev/null +++ b/versioned_docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/google-gce.md @@ -0,0 +1,86 @@ +--- +title: GCE Machine Configuration +--- + + + + + + +For more information about Google Cloud Platform (GCP) and the Google Compute Engine (GCE), refer to the official [GCP documentation](https://cloud.google.com/docs). + +### Zone + +The GCP Region and Zone that the VM will be deployed to. For example, `us-east1-b`. + +### Machine Image Project + +The image project that the desired image families belong to. + +### Machine Image Family + +The image family that the desired machine operating system belongs to. + +### Machine Image + +The operating system that will be installed onto the VM. + +### Disk Type + +The type of the disk attached to the VM. The available types may differ between regions. + +### Disk Size + +The size of the disk attached to the VM, in Gigabytes. 
+
+### Machine Type
+
+The type of VM that will be deployed. Machine types determine the number of resources (vCPU, RAM, etc.) allocated for each node.
+
+### Network
+
+The VPC network that the VM will be created in. This value cannot be changed once the machine pool has been provisioned.
+
+### Subnet
+
+The VPC subnetwork that the VM will be created in. This value cannot be changed once the machine pool has been provisioned.
+
+### Username
+
+A custom username set as the default user of the GCE VM.
+
+### External Address
+
+The desired external IP address for the GCE VM.
+
+### Scopes
+
+A list of OAuth2 scopes which allow the VM to access other GCP APIs.
+
+### Allow Internal Communication
+
+By default, a VPC firewall rule is automatically created to expose a fixed set of ports within the VPC to facilitate communication between cluster nodes. This behavior can be disabled on a per-machine-pool basis by clicking the `Show Advanced` option and disabling the `Allow Internal Communication` checkbox.
+
+### Expose External ports
+
+A list of ports to be opened _externally_ to the wider internet. Open ports are defined at the machine pool level. Enabling this option will result in the automatic creation of a VPC firewall rule. This rule will be automatically deleted when the cluster or machine pool is deleted.
+
+### Network Tags
+
+A list of _network tags_, which can be used to associate preexisting Firewall Rules with all VMs within a machine pool.
+
+### Labels
+
+A comma-separated list of custom labels to be attached to all VMs within a given machine pool. Unlike Tags, Labels do not influence networking behavior and only serve to organize cloud resources.
+
+## Advanced Options
+
+When creating clusters via the Rancher UI, some options are automatically configured for you. However, when creating machine config objects manually, you must ensure you properly configure the fields below.
+
+### external-firewall-rule-prefix
+
+A prefix that will be used when creating the firewall rule to expose ports publicly. Ideally, this should be a concatenation of the machine pool name and the cluster name. This field must be set if the machine pool is configured to expose ports publicly; otherwise, it can be omitted.
+
+### internal-firewall-rule-prefix
+
+A prefix that will be used when creating the internal firewall rule that allows communication between nodes within the cluster. If this field is omitted, no internal firewall rule will be created.
diff --git a/versioned_sidebars/version-2.12-sidebars.json b/versioned_sidebars/version-2.12-sidebars.json
index 1aa118c964e..4835f69ff42 100644
--- a/versioned_sidebars/version-2.12-sidebars.json
+++ b/versioned_sidebars/version-2.12-sidebars.json
@@ -504,6 +504,7 @@
         "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster",
         "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-digitalocean-cluster",
         "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-azure-cluster",
+        "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-google-compute-engine-cluster",
         {
           "type": "category",
           "label": "Creating a VMware vSphere Cluster",
@@ -909,7 +910,8 @@
           "items": [
             "reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/amazon-ec2",
             "reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/digitalocean",
-            "reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/azure"
+            "reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/azure",
+            "reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/google-gce"
           ]
         }
       ]
From 
28d6a8ff1c519884f2f99d398ca5216558cb5f1d Mon Sep 17 00:00:00 2001 From: Andreas Kupries Date: Tue, 24 Jun 2025 14:56:05 +0200 Subject: [PATCH 03/57] Add Tokens example workflows page --- docs/api/workflows/tokens.md | 130 +++++++++++++++++++++++++++++++++++ 1 file changed, 130 insertions(+) create mode 100644 docs/api/workflows/tokens.md diff --git a/docs/api/workflows/tokens.md b/docs/api/workflows/tokens.md new file mode 100644 index 00000000000..d41611f203f --- /dev/null +++ b/docs/api/workflows/tokens.md @@ -0,0 +1,130 @@ +--- +title: Tokens +--- + + + + + +## Feature Flag + +The Tokens Public API is available since Rancher v2.12.0 and is enabled by default. +It can be disabled by setting the `ext-tokens` feature flag to `false`. + +```sh +kubectl patch feature ext-tokens -p '{"spec":{"value":false}}' +``` + +## Creating a Token + +Only a **valid and active** Rancher user can create a Token. + +```bash +kubectl create -o jsonpath='{.status.value}' -f -< Date: Tue, 24 Jun 2025 17:21:36 +0200 Subject: [PATCH 04/57] Apply suggestions from code review Co-authored-by: Petr Kovar --- docs/api/workflows/tokens.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/docs/api/workflows/tokens.md b/docs/api/workflows/tokens.md index d41611f203f..e2957dd56a2 100644 --- a/docs/api/workflows/tokens.md +++ b/docs/api/workflows/tokens.md @@ -35,9 +35,9 @@ The default is empty. The `spec.kind` field can be set to the kind of token. The value "session" indicates a login token. -All other kinds, including the default (empy string) indicate some kind of derived token. +All other kinds, including the default (empty string) indicate some kind of derived token. -The `name` and `generateName` fields of the new token are ignored. The system automatically choosen a name using the prefix `token-`. +The `name` and `generateName` fields of the new token are ignored. The system automatically chooses a name using the prefix `token-`. 
```bash kubectl create -o jsonpath='{.status.value}' -f -< Date: Thu, 26 Jun 2025 14:57:09 +0200 Subject: [PATCH 05/57] Apply suggestions from code review Co-authored-by: Peter Matseykanets --- docs/api/workflows/tokens.md | 13 +++---------- 1 file changed, 3 insertions(+), 10 deletions(-) diff --git a/docs/api/workflows/tokens.md b/docs/api/workflows/tokens.md index e2957dd56a2..cd8d04b6827 100644 --- a/docs/api/workflows/tokens.md +++ b/docs/api/workflows/tokens.md @@ -37,7 +37,7 @@ The `spec.kind` field can be set to the kind of token. The value "session" indicates a login token. All other kinds, including the default (empty string) indicate some kind of derived token. -The `name` and `generateName` fields of the new token are ignored. The system automatically chooses a name using the prefix `token-`. +The `metadata.name` and `metadata.generateName` fields are ignored and the name of the new Token is automatically generated using the prefix `token-`. ```bash kubectl create -o jsonpath='{.status.value}' -f -< Date: Mon, 14 Jul 2025 10:26:37 +0200 Subject: [PATCH 06/57] Apply suggestions from code review Co-authored-by: Sunil Singh --- docs/api/workflows/tokens.md | 30 ++++++++++-------------------- 1 file changed, 10 insertions(+), 20 deletions(-) diff --git a/docs/api/workflows/tokens.md b/docs/api/workflows/tokens.md index cd8d04b6827..24ec7f2c32b 100644 --- a/docs/api/workflows/tokens.md +++ b/docs/api/workflows/tokens.md @@ -8,8 +8,7 @@ title: Tokens ## Feature Flag -The Tokens Public API is available since Rancher v2.12.0 and is enabled by default. -It can be disabled by setting the `ext-tokens` feature flag to `false`. +The Tokens Public API is available for Rancher v2.12.0 and later, and is enabled by default. 
It can be disabled by setting the `ext-tokens` feature flag to `false` as shown in the example `kubectl` command below: ```sh kubectl patch feature ext-tokens -p '{"spec":{"value":false}}' @@ -17,7 +16,7 @@ kubectl patch feature ext-tokens -p '{"spec":{"value":false}}' ## Creating a Token -Only a **valid and active** Rancher user can create a Token. +Only a **valid and active** Rancher user can create a Token, otherwise you will get an error displayed (`Error from server (Forbidden)...`) when attempting to create a Token. ```bash kubectl create -o jsonpath='{.status.value}' -f -< Date: Mon, 14 Jul 2025 12:48:51 +0200 Subject: [PATCH 07/57] address comments, tab indentation --- docs/api/workflows/tokens.md | 32 ++++++++++++++++---------------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/docs/api/workflows/tokens.md b/docs/api/workflows/tokens.md index 24ec7f2c32b..6b19112f299 100644 --- a/docs/api/workflows/tokens.md +++ b/docs/api/workflows/tokens.md @@ -34,25 +34,25 @@ A Token is always created for the user making the request. Attempting to create - The `metadata.name` and `metadata.generateName` fields are ignored and the name of the new Token is automatically generated using the prefix `token-`. 
-```bash -kubectl create -o jsonpath='{.status.value}' -f -< Date: Mon, 14 Jul 2025 12:56:22 +0200 Subject: [PATCH 08/57] added to sidebar, and imported into 2.12 setup --- sidebars.js | 3 +- .../version-2.12/api/workflows/tokens.md | 114 ++++++++++++++++++ versioned_sidebars/version-2.12-sidebars.json | 3 +- 3 files changed, 118 insertions(+), 2 deletions(-) create mode 100644 versioned_docs/version-2.12/api/workflows/tokens.md diff --git a/sidebars.js b/sidebars.js index a522e28d011..e8104011c45 100644 --- a/sidebars.js +++ b/sidebars.js @@ -1345,7 +1345,8 @@ const sidebars = { "label": "Example Workflows", "items": [ "api/workflows/projects", - "api/workflows/kubeconfigs" + "api/workflows/kubeconfigs", + "api/workflows/tokens" ] }, "api/api-reference", diff --git a/versioned_docs/version-2.12/api/workflows/tokens.md b/versioned_docs/version-2.12/api/workflows/tokens.md new file mode 100644 index 00000000000..6b19112f299 --- /dev/null +++ b/versioned_docs/version-2.12/api/workflows/tokens.md @@ -0,0 +1,114 @@ +--- +title: Tokens +--- + + + + + +## Feature Flag + +The Tokens Public API is available for Rancher v2.12.0 and later, and is enabled by default. It can be disabled by setting the `ext-tokens` feature flag to `false` as shown in the example `kubectl` command below: + +```sh +kubectl patch feature ext-tokens -p '{"spec":{"value":false}}' +``` + +## Creating a Token + +Only a **valid and active** Rancher user can create a Token, otherwise you will get an error displayed (`Error from server (Forbidden)...`) when attempting to create a Token. + +```bash +kubectl create -o jsonpath='{.status.value}' -f -< Date: Wed, 16 Jul 2025 09:49:41 +0200 Subject: [PATCH 09/57] Apply suggestions from code review Thanks for the review and corrections. 
Co-authored-by: Lucas Saintarbor --- docs/api/workflows/tokens.md | 7 +++---- versioned_docs/version-2.12/api/workflows/tokens.md | 7 +++---- 2 files changed, 6 insertions(+), 8 deletions(-) diff --git a/docs/api/workflows/tokens.md b/docs/api/workflows/tokens.md index 6b19112f299..517657932dd 100644 --- a/docs/api/workflows/tokens.md +++ b/docs/api/workflows/tokens.md @@ -76,7 +76,7 @@ token-6fzgj user-jtghh 90d 22s box token-8nbrm user-jtghh 90d 20s jinx ``` -#### Viewing a Token +## Viewing a Token Admins can get any Token, while regular users can only get their own. @@ -94,7 +94,7 @@ NAME USER KIND TTL AGE DESCRIPTION token-chjc9 user-jtghh 90d 24s example ``` -#### Deleting a Token +## Deleting a Token Admins can delete any Token, while regular users can only delete their own. @@ -103,8 +103,7 @@ kubectl delete tokens.ext.cattle.io token-chjc9 token.ext.cattle.io "token-chjc9" deleted ``` - -#### Updating a Token +## Updating a Token Only the metadata fields `spec.description`, `spec.ttl`, and `spec.enabled` can be updated. All other `spec` fields are immutable. Admins are able to extend the `spec.ttl` field, while regular users can only reduce the value. diff --git a/versioned_docs/version-2.12/api/workflows/tokens.md b/versioned_docs/version-2.12/api/workflows/tokens.md index 6b19112f299..517657932dd 100644 --- a/versioned_docs/version-2.12/api/workflows/tokens.md +++ b/versioned_docs/version-2.12/api/workflows/tokens.md @@ -76,7 +76,7 @@ token-6fzgj user-jtghh 90d 22s box token-8nbrm user-jtghh 90d 20s jinx ``` -#### Viewing a Token +## Viewing a Token Admins can get any Token, while regular users can only get their own. @@ -94,7 +94,7 @@ NAME USER KIND TTL AGE DESCRIPTION token-chjc9 user-jtghh 90d 24s example ``` -#### Deleting a Token +## Deleting a Token Admins can delete any Token, while regular users can only delete their own. 
@@ -103,8 +103,7 @@ kubectl delete tokens.ext.cattle.io token-chjc9 token.ext.cattle.io "token-chjc9" deleted ``` - -#### Updating a Token +## Updating a Token Only the metadata fields `spec.description`, `spec.ttl`, and `spec.enabled` can be updated. All other `spec` fields are immutable. Admins are able to extend the `spec.ttl` field, while regular users can only reduce the value. From f46a365c008aaefa5336112551aa405e1e4c0b5d Mon Sep 17 00:00:00 2001 From: Andreas Kupries Date: Wed, 23 Jul 2025 10:59:03 +0200 Subject: [PATCH 10/57] Apply suggestions from code review Co-authored-by: Lucas Saintarbor --- docs/api/workflows/tokens.md | 16 ++++++++-------- .../version-2.12/api/workflows/tokens.md | 16 ++++++++-------- 2 files changed, 16 insertions(+), 16 deletions(-) diff --git a/docs/api/workflows/tokens.md b/docs/api/workflows/tokens.md index 517657932dd..6387ed01972 100644 --- a/docs/api/workflows/tokens.md +++ b/docs/api/workflows/tokens.md @@ -8,7 +8,7 @@ title: Tokens ## Feature Flag -The Tokens Public API is available for Rancher v2.12.0 and later, and is enabled by default. It can be disabled by setting the `ext-tokens` feature flag to `false` as shown in the example `kubectl` command below: +The Tokens Public API is available for Rancher v2.12.0 and later, and is enabled by default. You can disable the Tokens Public API by setting the `ext-tokens` feature flag to `false` as shown in the example `kubectl` command below: ```sh kubectl patch feature ext-tokens -p '{"spec":{"value":false}}' @@ -16,7 +16,7 @@ kubectl patch feature ext-tokens -p '{"spec":{"value":false}}' ## Creating a Token -Only a **valid and active** Rancher user can create a Token, otherwise you will get an error displayed (`Error from server (Forbidden)...`) when attempting to create a Token. +Only a **valid and active** Rancher user can create a Token. Otherwise, you will get an error displayed (`Error from server (Forbidden)...`) when attempting to create a Token. 
```bash kubectl create -o jsonpath='{.status.value}' -f -< Date: Wed, 23 Jul 2025 11:02:08 +0200 Subject: [PATCH 11/57] address comment. fix unclosed example --- docs/api/workflows/tokens.md | 1 + versioned_docs/version-2.12/api/workflows/tokens.md | 1 + 2 files changed, 2 insertions(+) diff --git a/docs/api/workflows/tokens.md b/docs/api/workflows/tokens.md index 6387ed01972..a84e3a7255d 100644 --- a/docs/api/workflows/tokens.md +++ b/docs/api/workflows/tokens.md @@ -111,3 +111,4 @@ An example `kubectl` command to edit a Token: ```sh kubectl edit tokens.ext.cattle.io token-zp786 +``` diff --git a/versioned_docs/version-2.12/api/workflows/tokens.md b/versioned_docs/version-2.12/api/workflows/tokens.md index 6387ed01972..a84e3a7255d 100644 --- a/versioned_docs/version-2.12/api/workflows/tokens.md +++ b/versioned_docs/version-2.12/api/workflows/tokens.md @@ -111,3 +111,4 @@ An example `kubectl` command to edit a Token: ```sh kubectl edit tokens.ext.cattle.io token-zp786 +``` From 8982e043eac59555f46514987b57e1083ef8e1ac Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Wed, 23 Jul 2025 13:20:43 -0700 Subject: [PATCH 12/57] Remove RKE1 references in disconnected-clusters.md --- .../rancher-managed-clusters/disconnected-clusters.md | 2 +- .../rancher-managed-clusters/disconnected-clusters.md | 2 +- .../rancher-managed-clusters/disconnected-clusters.md | 2 +- .../rancher-managed-clusters/disconnected-clusters.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/reference-guides/best-practices/rancher-managed-clusters/disconnected-clusters.md b/docs/reference-guides/best-practices/rancher-managed-clusters/disconnected-clusters.md index c3c9b7a732d..e0b9eb6757f 100644 --- a/docs/reference-guides/best-practices/rancher-managed-clusters/disconnected-clusters.md +++ b/docs/reference-guides/best-practices/rancher-managed-clusters/disconnected-clusters.md @@ -16,4 +16,4 @@ While a managed cluster is disconnected from Rancher, management operations 
will - **Cleaning Up Disconnected Clusters**: Regularly remove clusters that will no longer reconnect to Rancher (e.g., clusters that have been decommissioned or destroyed). Keeping such clusters in the Rancher management system consumes unnecessary resources, which could impact Rancher's performance over time. -- **Certificate Rotation Considerations**: When designing processes that involve regularly shutting down clusters, whether connected to Rancher or not, take into account certificate rotation policies. For example, RKE/RKE2/K3s clusters may rotate certificates on startup if they exceeded their lifetime. +- **Certificate Rotation Considerations**: When designing processes that involve regularly shutting down clusters, whether connected to Rancher or not, take into account certificate rotation policies. For example, RKE2/K3s clusters may rotate certificates on startup if they exceeded their lifetime. diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/best-practices/rancher-managed-clusters/disconnected-clusters.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/best-practices/rancher-managed-clusters/disconnected-clusters.md index c3c9b7a732d..e0b9eb6757f 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/best-practices/rancher-managed-clusters/disconnected-clusters.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/best-practices/rancher-managed-clusters/disconnected-clusters.md @@ -16,4 +16,4 @@ While a managed cluster is disconnected from Rancher, management operations will - **Cleaning Up Disconnected Clusters**: Regularly remove clusters that will no longer reconnect to Rancher (e.g., clusters that have been decommissioned or destroyed). Keeping such clusters in the Rancher management system consumes unnecessary resources, which could impact Rancher's performance over time. 
-- **Certificate Rotation Considerations**: When designing processes that involve regularly shutting down clusters, whether connected to Rancher or not, take into account certificate rotation policies. For example, RKE/RKE2/K3s clusters may rotate certificates on startup if they exceeded their lifetime. +- **Certificate Rotation Considerations**: When designing processes that involve regularly shutting down clusters, whether connected to Rancher or not, take into account certificate rotation policies. For example, RKE2/K3s clusters may rotate certificates on startup if they exceeded their lifetime. diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/disconnected-clusters.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/disconnected-clusters.md index c3c9b7a732d..e0b9eb6757f 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/disconnected-clusters.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/disconnected-clusters.md @@ -16,4 +16,4 @@ While a managed cluster is disconnected from Rancher, management operations will - **Cleaning Up Disconnected Clusters**: Regularly remove clusters that will no longer reconnect to Rancher (e.g., clusters that have been decommissioned or destroyed). Keeping such clusters in the Rancher management system consumes unnecessary resources, which could impact Rancher's performance over time. -- **Certificate Rotation Considerations**: When designing processes that involve regularly shutting down clusters, whether connected to Rancher or not, take into account certificate rotation policies. For example, RKE/RKE2/K3s clusters may rotate certificates on startup if they exceeded their lifetime. 
+- **Certificate Rotation Considerations**: When designing processes that involve regularly shutting down clusters, whether connected to Rancher or not, take into account certificate rotation policies. For example, RKE2/K3s clusters may rotate certificates on startup if they exceeded their lifetime. diff --git a/versioned_docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/disconnected-clusters.md b/versioned_docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/disconnected-clusters.md index c3c9b7a732d..e0b9eb6757f 100644 --- a/versioned_docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/disconnected-clusters.md +++ b/versioned_docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/disconnected-clusters.md @@ -16,4 +16,4 @@ While a managed cluster is disconnected from Rancher, management operations will - **Cleaning Up Disconnected Clusters**: Regularly remove clusters that will no longer reconnect to Rancher (e.g., clusters that have been decommissioned or destroyed). Keeping such clusters in the Rancher management system consumes unnecessary resources, which could impact Rancher's performance over time. -- **Certificate Rotation Considerations**: When designing processes that involve regularly shutting down clusters, whether connected to Rancher or not, take into account certificate rotation policies. For example, RKE/RKE2/K3s clusters may rotate certificates on startup if they exceeded their lifetime. +- **Certificate Rotation Considerations**: When designing processes that involve regularly shutting down clusters, whether connected to Rancher or not, take into account certificate rotation policies. For example, RKE2/K3s clusters may rotate certificates on startup if they exceeded their lifetime. 
From e25fad5cfddb5225664a8d8200838957f95d4a28 Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Wed, 23 Jul 2025 13:50:05 -0700 Subject: [PATCH 13/57] Remove RKE1 references in logging-best-practices.md --- .../rancher-managed-clusters/logging-best-practices.md | 4 +--- .../rancher-managed-clusters/logging-best-practices.md | 4 +--- .../rancher-managed-clusters/logging-best-practices.md | 4 +--- .../rancher-managed-clusters/logging-best-practices.md | 4 +--- 4 files changed, 4 insertions(+), 12 deletions(-) diff --git a/docs/reference-guides/best-practices/rancher-managed-clusters/logging-best-practices.md b/docs/reference-guides/best-practices/rancher-managed-clusters/logging-best-practices.md index dc200ae62b1..7fda8ae4c35 100644 --- a/docs/reference-guides/best-practices/rancher-managed-clusters/logging-best-practices.md +++ b/docs/reference-guides/best-practices/rancher-managed-clusters/logging-best-practices.md @@ -30,9 +30,7 @@ Once you have created these _ClusterOutput_ objects, create a _ClusterFlow_ to c ### Kubernetes Components -_ClusterFlows_ have the ability to collect logs from all containers on all hosts in the Kubernetes cluster. This works well in cases where those containers are part of a Kubernetes pod; however, RKE containers exist outside of the scope of Kubernetes. - -Currently the logs from RKE containers are collected, but are not able to easily be filtered. This is because those logs do not contain information as to the source container (e.g. `etcd` or `kube-apiserver`). +_ClusterFlows_ have the ability to collect logs from all containers on all hosts in the Kubernetes cluster. This works well in cases where those containers are part of a Kubernetes pod. A future release of Rancher will include the source container name which will enable filtering of these component logs. Once that change is made, you will be able to customize a _ClusterFlow_ to retrieve **only** the Kubernetes component logs, and direct them to an appropriate output. 
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/best-practices/rancher-managed-clusters/logging-best-practices.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/best-practices/rancher-managed-clusters/logging-best-practices.md index afc1247854f..64d58392251 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/best-practices/rancher-managed-clusters/logging-best-practices.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/best-practices/rancher-managed-clusters/logging-best-practices.md @@ -26,9 +26,7 @@ Rancher Logging 使用的是 [Logging Operator](https://github.com/kube-logging/ ### Kubernetes 组件 -_ClusterFlows_ 能够收集 Kubernetes 集群中所有主机上所有容器的日志。如果这些容器包含在 Kubernetes Pod 中,这个方法是适用的。但是,RKE 容器不存在于 Kubernetes 内。 - -目前,Rancher 能搜集 RKE 容器的日志,但不能轻易过滤。这是因为这些日志不包含源容器的信息(例如 `etcd` 或 `kube-apiserver`)。 +_ClusterFlows_ 能够收集 Kubernetes 集群中所有主机上所有容器的日志。如果这些容器包含在 Kubernetes Pod 中,这个方法是适用的。 Rancher 的未来版本将包含源容器名称,来支持过滤这些组件的日志。该功能实现之后,你将能够自定义 _ClusterFlow_ 来**仅**检索 Kubernetes 组件日志,并将日志发送到适当的输出位置。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/logging-best-practices.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/logging-best-practices.md index afc1247854f..64d58392251 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/logging-best-practices.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/logging-best-practices.md @@ -26,9 +26,7 @@ Rancher Logging 使用的是 [Logging Operator](https://github.com/kube-logging/ ### Kubernetes 组件 -_ClusterFlows_ 能够收集 Kubernetes 集群中所有主机上所有容器的日志。如果这些容器包含在 Kubernetes Pod 中,这个方法是适用的。但是,RKE 容器不存在于 Kubernetes 内。 - -目前,Rancher 能搜集 RKE 容器的日志,但不能轻易过滤。这是因为这些日志不包含源容器的信息(例如 `etcd` 或 `kube-apiserver`)。 
+_ClusterFlows_ 能够收集 Kubernetes 集群中所有主机上所有容器的日志。如果这些容器包含在 Kubernetes Pod 中,这个方法是适用的。 Rancher 的未来版本将包含源容器名称,来支持过滤这些组件的日志。该功能实现之后,你将能够自定义 _ClusterFlow_ 来**仅**检索 Kubernetes 组件日志,并将日志发送到适当的输出位置。 diff --git a/versioned_docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/logging-best-practices.md b/versioned_docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/logging-best-practices.md index dc200ae62b1..7fda8ae4c35 100644 --- a/versioned_docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/logging-best-practices.md +++ b/versioned_docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/logging-best-practices.md @@ -30,9 +30,7 @@ Once you have created these _ClusterOutput_ objects, create a _ClusterFlow_ to c ### Kubernetes Components -_ClusterFlows_ have the ability to collect logs from all containers on all hosts in the Kubernetes cluster. This works well in cases where those containers are part of a Kubernetes pod; however, RKE containers exist outside of the scope of Kubernetes. - -Currently the logs from RKE containers are collected, but are not able to easily be filtered. This is because those logs do not contain information as to the source container (e.g. `etcd` or `kube-apiserver`). +_ClusterFlows_ have the ability to collect logs from all containers on all hosts in the Kubernetes cluster. This works well in cases where those containers are part of a Kubernetes pod. A future release of Rancher will include the source container name which will enable filtering of these component logs. Once that change is made, you will be able to customize a _ClusterFlow_ to retrieve **only** the Kubernetes component logs, and direct them to an appropriate output. 
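The _ClusterFlow_/_ClusterOutput_ pairing that the patch above describes can be sketched as a pair of manifests. This is an illustrative fragment only — the resource names and the Elasticsearch endpoint are placeholders, and it assumes the Logging Operator's `logging.banzaicloud.io/v1beta1` API used by Rancher Logging:

```yaml
# Illustrative only: names and the Elasticsearch endpoint are placeholders.
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: example-output
  namespace: cattle-logging-system
spec:
  elasticsearch:
    host: elasticsearch.example.com   # placeholder endpoint
    port: 9200
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: all-cluster-logs
  namespace: cattle-logging-system
spec:
  # No match rules: collects logs from all containers on all hosts.
  # Match rules could narrow this to component logs once the source
  # container name becomes available for filtering.
  globalOutputRefs:
    - example-output
```

A _ClusterFlow_ with an empty match section routes everything to the referenced _ClusterOutput_, which is why component-level filtering depends on the source container name being present in the log metadata.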
From d553041102403a082bb36d874f3851f220cc6d9b Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Wed, 23 Jul 2025 14:24:56 -0700 Subject: [PATCH 14/57] Remove RKE1 references in tips-for-running-rancher.md --- .../best-practices/rancher-server/tips-for-running-rancher.md | 3 --- .../best-practices/rancher-server/tips-for-running-rancher.md | 4 ---- .../best-practices/rancher-server/tips-for-running-rancher.md | 4 ---- .../best-practices/rancher-server/tips-for-running-rancher.md | 3 --- 4 files changed, 14 deletions(-) diff --git a/docs/reference-guides/best-practices/rancher-server/tips-for-running-rancher.md b/docs/reference-guides/best-practices/rancher-server/tips-for-running-rancher.md index c8e12b81efe..877b5084bf9 100644 --- a/docs/reference-guides/best-practices/rancher-server/tips-for-running-rancher.md +++ b/docs/reference-guides/best-practices/rancher-server/tips-for-running-rancher.md @@ -54,9 +54,6 @@ Consider the following recommendations based on your needs: ### Make sure nodes are configured correctly for Kubernetes It's important to follow K8s and etcd best practices when deploying your nodes, including disabling swap, double checking you have full network connectivity between all machines in the cluster, using unique hostnames, MAC addresses, and product_uuids for every node, checking that all correct ports are opened, and deploying with ssd backed etcd. More details can be found in the [kubernetes docs](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) and [etcd's performance op guide](https://etcd.io/docs/v3.5/op-guide/performance/). -### When using RKE: Back up the Statefile -RKE keeps record of the cluster state in a file called `cluster.rkestate`. This file is important for the recovery of a cluster and/or the continued maintenance of the cluster through RKE. Because this file contains certificate material, we strongly recommend encrypting this file before backing up. 
After each run of `rke up` you should backup the state file. - ### Run All Nodes in the Cluster in the Same Datacenter For best performance, run all three of your nodes in the same geographic datacenter. If you are running nodes in the cloud, such as AWS, run each node in a separate Availability Zone. For example, launch node 1 in us-west-2a, node 2 in us-west-2b, and node 3 in us-west-2c. diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/best-practices/rancher-server/tips-for-running-rancher.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/best-practices/rancher-server/tips-for-running-rancher.md index df057a54983..3e7a33d3e88 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/best-practices/rancher-server/tips-for-running-rancher.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/best-practices/rancher-server/tips-for-running-rancher.md @@ -18,10 +18,6 @@ title: Rancher 运行技巧 在部署节点时,请遵循 K8s 和 etcd 的最佳实践,其中包括禁用 swap,检查集群中的所有主机之间是否有良好的网络连接,为每个节点使用唯一的主机名、MAC 地址和 `product_uuids`,检查所需端口是否已经打开,并使用配置 SSD 的 etcd 进行部署。详情请参见 [kubernetes 官方文档](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin)和 [etcd 性能操作指南](https://etcd.io/docs/v3.5/op-guide/performance/)。 -## 使用 RKE 时:备份状态文件(Statefile) - -RKE 将集群状态记录在一个名为 `cluster.rkestate` 的文件中,该文件对集群的恢复和/或通过 RKE 维护集群非常重要。由于这个文件包含证书材料,我们强烈建议在备份前对该文件进行加密。请在每次运行 `rke up` 后备份状态文件。 - ## 在同一个数据中心运行集群中的所有节点 为达到最佳性能,请在同一地理数据中心运行所有三个节点。如果你在云(如 AWS)上运行节点,请在不同的可用区(AZ)中运行这三个节点。例如,在 us-west-2a 中运行节点 1,在 us-west-2b 中运行节点 2,在 us-west-2c 中运行节点 3。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/best-practices/rancher-server/tips-for-running-rancher.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/best-practices/rancher-server/tips-for-running-rancher.md index df057a54983..3e7a33d3e88 100644 --- 
a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/best-practices/rancher-server/tips-for-running-rancher.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/best-practices/rancher-server/tips-for-running-rancher.md @@ -18,10 +18,6 @@ title: Rancher 运行技巧 在部署节点时,请遵循 K8s 和 etcd 的最佳实践,其中包括禁用 swap,检查集群中的所有主机之间是否有良好的网络连接,为每个节点使用唯一的主机名、MAC 地址和 `product_uuids`,检查所需端口是否已经打开,并使用配置 SSD 的 etcd 进行部署。详情请参见 [kubernetes 官方文档](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin)和 [etcd 性能操作指南](https://etcd.io/docs/v3.5/op-guide/performance/)。 -## 使用 RKE 时:备份状态文件(Statefile) - -RKE 将集群状态记录在一个名为 `cluster.rkestate` 的文件中,该文件对集群的恢复和/或通过 RKE 维护集群非常重要。由于这个文件包含证书材料,我们强烈建议在备份前对该文件进行加密。请在每次运行 `rke up` 后备份状态文件。 - ## 在同一个数据中心运行集群中的所有节点 为达到最佳性能,请在同一地理数据中心运行所有三个节点。如果你在云(如 AWS)上运行节点,请在不同的可用区(AZ)中运行这三个节点。例如,在 us-west-2a 中运行节点 1,在 us-west-2b 中运行节点 2,在 us-west-2c 中运行节点 3。 diff --git a/versioned_docs/version-2.12/reference-guides/best-practices/rancher-server/tips-for-running-rancher.md b/versioned_docs/version-2.12/reference-guides/best-practices/rancher-server/tips-for-running-rancher.md index c8e12b81efe..877b5084bf9 100644 --- a/versioned_docs/version-2.12/reference-guides/best-practices/rancher-server/tips-for-running-rancher.md +++ b/versioned_docs/version-2.12/reference-guides/best-practices/rancher-server/tips-for-running-rancher.md @@ -54,9 +54,6 @@ Consider the following recommendations based on your needs: ### Make sure nodes are configured correctly for Kubernetes It's important to follow K8s and etcd best practices when deploying your nodes, including disabling swap, double checking you have full network connectivity between all machines in the cluster, using unique hostnames, MAC addresses, and product_uuids for every node, checking that all correct ports are opened, and deploying with ssd backed etcd. 
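The node recommendations above (swap disabled, unique hostnames and product_uuids) can be spot-checked with a short script. A minimal sketch, assuming a standard Linux host — the paths and commands are illustrative, not an exhaustive preflight:

```shell
#!/bin/sh
# Preflight sketch for the Kubernetes/etcd node best practices above.
# Run on each node; compare hostname and product_uuid output across nodes.

check_swap() {
  # Kubernetes expects swap to be off; `swapon --noheadings` prints nothing then.
  if [ -z "$(swapon --noheadings 2>/dev/null)" ]; then
    echo "swap: ok"
  else
    echo "swap: still enabled"
  fi
}

check_identity() {
  # Hostname and product_uuid must be unique per node in the cluster.
  echo "hostname: $(hostname)"
  echo "product_uuid: $(cat /sys/class/dmi/id/product_uuid 2>/dev/null || echo unreadable)"
}

check_swap
check_identity
```

Port reachability and etcd disk latency still need separate checks; see the linked Kubernetes and etcd guides.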
More details can be found in the [kubernetes docs](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) and [etcd's performance op guide](https://etcd.io/docs/v3.5/op-guide/performance/). -### When using RKE: Back up the Statefile -RKE keeps record of the cluster state in a file called `cluster.rkestate`. This file is important for the recovery of a cluster and/or the continued maintenance of the cluster through RKE. Because this file contains certificate material, we strongly recommend encrypting this file before backing up. After each run of `rke up` you should backup the state file. - ### Run All Nodes in the Cluster in the Same Datacenter For best performance, run all three of your nodes in the same geographic datacenter. If you are running nodes in the cloud, such as AWS, run each node in a separate Availability Zone. For example, launch node 1 in us-west-2a, node 2 in us-west-2b, and node 3 in us-west-2c. From 14249c7cb29cb78006ad6b359fbfa2369681394d Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Wed, 23 Jul 2025 14:33:33 -0700 Subject: [PATCH 15/57] Remove RKE1 references in tuning-and-best-practices-for-rancher-at-scale.md --- .../tuning-and-best-practices-for-rancher-at-scale.md | 2 +- .../tuning-and-best-practices-for-rancher-at-scale.md | 2 +- .../tuning-and-best-practices-for-rancher-at-scale.md | 2 +- .../tuning-and-best-practices-for-rancher-at-scale.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md b/docs/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md index a760bbebff1..87a74b259c2 100644 --- a/docs/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md +++ b/docs/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md @@ -66,7 +66,7 @@ You 
should remove any remaining legacy apps that appear in the Cluster Manager U ### Using the Authorized Cluster Endpoint (ACE) -An [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) (ACE) provides access to the Kubernetes API of Rancher-provisioned RKE, RKE2, and K3s clusters. When enabled, the ACE adds a context to kubeconfig files generated for the cluster. The context uses a direct endpoint to the cluster, thereby bypassing Rancher. This reduces load on Rancher for cases where unmediated API access is acceptable or preferable. See [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) for more information and configuration instructions. +An [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) (ACE) provides access to the Kubernetes API of Rancher-provisioned RKE2 and K3s clusters. When enabled, the ACE adds a context to kubeconfig files generated for the cluster. The context uses a direct endpoint to the cluster, thereby bypassing Rancher. This reduces load on Rancher for cases where unmediated API access is acceptable or preferable. See [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) for more information and configuration instructions. 
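The ACE behavior described above is visible in the kubeconfig Rancher generates: alongside the default Rancher-proxied context, an extra direct context appears. A hedged sketch (`my-cluster-fqdn` is a hypothetical context name; list yours first):

```
kubectl config get-contexts          # shows the Rancher context plus the ACE context
kubectl config use-context my-cluster-fqdn
kubectl get nodes                    # this request now bypasses Rancher
```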
### Reducing Event Handler Executions diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md index 154b01c5ad4..68ae57eae46 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md @@ -76,7 +76,7 @@ Rancher 使用两个 Kubernetes 应用程序资源:`apps.projects.cattle.io` ### 使用授权集群端点 (ACE) -[授权集群端点](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点) (ACE) 提供了 Rancher 部署的 RKE、RKE2 和 K3s 集群的 Kubernetes API 访问。启用后,ACE 会为生成的 `kubeconfig` 文件配置直接访问下游集群 Endpoint,从而绕过 Rancher 代理。在可以直接访问下游集群 Kubernetes API 的场景下,可以减少 Rancher 负载。有关更多信息,请参阅[授权集群端点](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点)配置说明。 +[授权集群端点](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点) (ACE) 提供了 Rancher 部署的 RKE2 和 K3s 集群的 Kubernetes API 访问。启用后,ACE 会为生成的 `kubeconfig` 文件配置直接访问下游集群 Endpoint,从而绕过 Rancher 代理。在可以直接访问下游集群 Kubernetes API 的场景下,可以减少 Rancher 负载。有关更多信息,请参阅[授权集群端点](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点)配置说明。 ### 减少 Event Handler 执行 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md index 154b01c5ad4..68ae57eae46 
100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md @@ -76,7 +76,7 @@ Rancher 使用两个 Kubernetes 应用程序资源:`apps.projects.cattle.io` ### 使用授权集群端点 (ACE) -[授权集群端点](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点) (ACE) 提供了 Rancher 部署的 RKE、RKE2 和 K3s 集群的 Kubernetes API 访问。启用后,ACE 会为生成的 `kubeconfig` 文件配置直接访问下游集群 Endpoint,从而绕过 Rancher 代理。在可以直接访问下游集群 Kubernetes API 的场景下,可以减少 Rancher 负载。有关更多信息,请参阅[授权集群端点](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点)配置说明。 +[授权集群端点](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点) (ACE) 提供了 Rancher 部署的 RKE2 和 K3s 集群的 Kubernetes API 访问。启用后,ACE 会为生成的 `kubeconfig` 文件配置直接访问下游集群 Endpoint,从而绕过 Rancher 代理。在可以直接访问下游集群 Kubernetes API 的场景下,可以减少 Rancher 负载。有关更多信息,请参阅[授权集群端点](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点)配置说明。 ### 减少 Event Handler 执行 diff --git a/versioned_docs/version-2.12/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md b/versioned_docs/version-2.12/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md index a760bbebff1..87a74b259c2 100644 --- a/versioned_docs/version-2.12/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md +++ b/versioned_docs/version-2.12/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md @@ -66,7 +66,7 @@ You should remove any remaining legacy apps that appear in the Cluster Manager U ### Using the Authorized Cluster Endpoint 
(ACE) -An [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) (ACE) provides access to the Kubernetes API of Rancher-provisioned RKE, RKE2, and K3s clusters. When enabled, the ACE adds a context to kubeconfig files generated for the cluster. The context uses a direct endpoint to the cluster, thereby bypassing Rancher. This reduces load on Rancher for cases where unmediated API access is acceptable or preferable. See [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) for more information and configuration instructions. +An [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) (ACE) provides access to the Kubernetes API of Rancher-provisioned RKE2 and K3s clusters. When enabled, the ACE adds a context to kubeconfig files generated for the cluster. The context uses a direct endpoint to the cluster, thereby bypassing Rancher. This reduces load on Rancher for cases where unmediated API access is acceptable or preferable. See [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) for more information and configuration instructions. 
### Reducing Event Handler Executions From 3175160ab40cbdfd0c16a92c4852501e498c5a79 Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Wed, 23 Jul 2025 14:48:10 -0700 Subject: [PATCH 16/57] Remove RKE1 references in amazon-ec2.md --- .../node-template-configuration/amazon-ec2.md | 2 +- .../node-template-configuration/amazon-ec2.md | 2 +- .../node-template-configuration/amazon-ec2.md | 2 +- .../node-template-configuration/amazon-ec2.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md b/docs/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md index e4b77c5174c..09a4a1a3361 100644 --- a/docs/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md +++ b/docs/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md @@ -38,7 +38,7 @@ Choose the default security group or configure a security group. Please refer to [Amazon EC2 security group when using Node Driver](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-aws-ec2-security-group) to see what rules are created in the `rancher-nodes` Security Group. -If you provide your own security group for an EC2 instance, please note that Rancher will not modify it. As such, you will be responsible for ensuring that your security group is set to allow the [necessary ports for Rancher to provision the instance](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#ports-for-rancher-server-nodes-on-rke). For more information on controlling inbound and outbound traffic to EC2 instances with security groups, refer [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups). 
+If you provide your own security group for an EC2 instance, please note that Rancher will not modify it. As such, you will be responsible for ensuring that your security group is set to allow the [necessary ports for Rancher to provision the instance](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-nodes). For more information on controlling inbound and outbound traffic to EC2 instances with security groups, refer [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups). ### Instance Options diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md index 5f222f19325..ae6d9b5dbae 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md @@ -34,7 +34,7 @@ title: EC2 节点模板配置 请参考[使用主机驱动时的 Amazon EC2 安全组](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-aws-ec2-安全组),了解 `rancher-nodes` 安全组中创建的规则。 -如果你自行为 EC2 实例提供安全组,Rancher 不会对其进行修改。因此,你需要让你的安全组允许 [Rancher 配置实例所需的端口](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rke-上-rancher-server-节点的端口)。有关使用安全组控制 EC2 实例的入站和出站流量的更多信息,请参阅[这里](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups)。 +如果你自行为 EC2 实例提供安全组,Rancher 不会对其进行修改。因此,你需要让你的安全组允许 [Rancher 
配置实例所需的端口](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-节点)。有关使用安全组控制 EC2 实例的入站和出站流量的更多信息,请参阅[这里](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups)。 ## 实例选项 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md index 5f222f19325..ae6d9b5dbae 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md @@ -34,7 +34,7 @@ title: EC2 节点模板配置 请参考[使用主机驱动时的 Amazon EC2 安全组](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-aws-ec2-安全组),了解 `rancher-nodes` 安全组中创建的规则。 -如果你自行为 EC2 实例提供安全组,Rancher 不会对其进行修改。因此,你需要让你的安全组允许 [Rancher 配置实例所需的端口](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rke-上-rancher-server-节点的端口)。有关使用安全组控制 EC2 实例的入站和出站流量的更多信息,请参阅[这里](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups)。 +如果你自行为 EC2 实例提供安全组,Rancher 不会对其进行修改。因此,你需要让你的安全组允许 [Rancher 配置实例所需的端口](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-节点)。有关使用安全组控制 EC2 实例的入站和出站流量的更多信息,请参阅[这里](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups)。 ## 实例选项 diff --git 
a/versioned_docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md b/versioned_docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md index e4b77c5174c..09a4a1a3361 100644 --- a/versioned_docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md +++ b/versioned_docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md @@ -38,7 +38,7 @@ Choose the default security group or configure a security group. Please refer to [Amazon EC2 security group when using Node Driver](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-aws-ec2-security-group) to see what rules are created in the `rancher-nodes` Security Group. -If you provide your own security group for an EC2 instance, please note that Rancher will not modify it. As such, you will be responsible for ensuring that your security group is set to allow the [necessary ports for Rancher to provision the instance](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#ports-for-rancher-server-nodes-on-rke). For more information on controlling inbound and outbound traffic to EC2 instances with security groups, refer [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups). +If you provide your own security group for an EC2 instance, please note that Rancher will not modify it. As such, you will be responsible for ensuring that your security group is set to allow the [necessary ports for Rancher to provision the instance](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-nodes). 
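Opening the required ports on a self-managed security group can be done with the standard AWS CLI. A sketch under assumptions — the group ID, CIDR, and port selection are placeholders; take the authoritative port list from the linked port-requirements page:

```
# SSH, used by Rancher to provision the instance
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 203.0.113.0/24

# Kubernetes API server
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 6443 --cidr 203.0.113.0/24
```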
For more information on controlling inbound and outbound traffic to EC2 instances with security groups, refer [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups). ### Instance Options From bc487e481b4cb0f2dc46a6fd40156418e892c491 Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Wed, 23 Jul 2025 14:57:37 -0700 Subject: [PATCH 17/57] Remove RKE1 references in rancher-server-configuration.md --- .../rancher-server-configuration/rancher-server-configuration.md | 1 - .../rancher-server-configuration/rancher-server-configuration.md | 1 - .../rancher-server-configuration/rancher-server-configuration.md | 1 - .../rancher-server-configuration/rancher-server-configuration.md | 1 - 4 files changed, 4 deletions(-) diff --git a/docs/reference-guides/cluster-configuration/rancher-server-configuration/rancher-server-configuration.md b/docs/reference-guides/cluster-configuration/rancher-server-configuration/rancher-server-configuration.md index 0e2aa590833..82be0707c8a 100644 --- a/docs/reference-guides/cluster-configuration/rancher-server-configuration/rancher-server-configuration.md +++ b/docs/reference-guides/cluster-configuration/rancher-server-configuration/rancher-server-configuration.md @@ -6,7 +6,6 @@ title: Rancher Server Configuration -- [RKE1 Cluster Configuration](rke1-cluster-configuration.md) - [RKE2 Cluster Configuration](rke2-cluster-configuration.md) - [K3s Cluster Configuration](k3s-cluster-configuration.md) - [EKS Cluster Configuration](eks-cluster-configuration.md) diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/rancher-server-configuration/rancher-server-configuration.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/rancher-server-configuration/rancher-server-configuration.md index 4efbf1bb06b..9d42590068e 100644 --- 
a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/rancher-server-configuration/rancher-server-configuration.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/rancher-server-configuration/rancher-server-configuration.md @@ -6,7 +6,6 @@ title: Rancher Server 配置 -- [RKE1 集群配置](rke1-cluster-configuration.md) - [RKE2 集群配置](rke2-cluster-configuration.md) - [K3s 集群配置](k3s-cluster-configuration.md) - [EKS 集群配置](eks-cluster-configuration.md) diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rancher-server-configuration.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rancher-server-configuration.md index 4efbf1bb06b..9d42590068e 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rancher-server-configuration.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rancher-server-configuration.md @@ -6,7 +6,6 @@ title: Rancher Server 配置 -- [RKE1 集群配置](rke1-cluster-configuration.md) - [RKE2 集群配置](rke2-cluster-configuration.md) - [K3s 集群配置](k3s-cluster-configuration.md) - [EKS 集群配置](eks-cluster-configuration.md) diff --git a/versioned_docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rancher-server-configuration.md b/versioned_docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rancher-server-configuration.md index 0e2aa590833..82be0707c8a 100644 --- a/versioned_docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rancher-server-configuration.md +++ b/versioned_docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rancher-server-configuration.md @@ -6,7 
+6,6 @@ title: Rancher Server Configuration -- [RKE1 Cluster Configuration](rke1-cluster-configuration.md) - [RKE2 Cluster Configuration](rke2-cluster-configuration.md) - [K3s Cluster Configuration](k3s-cluster-configuration.md) - [EKS Cluster Configuration](eks-cluster-configuration.md) From b2acf410b6af1e399e2188a3c1e68481efb71c61 Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Wed, 23 Jul 2025 15:23:14 -0700 Subject: [PATCH 18/57] Remove rke1-cluster-configuration.md --- .../rke1-cluster-configuration.md | 365 ------------------ .../rke1-cluster-configuration.md | 357 ----------------- .../rke1-cluster-configuration.md | 357 ----------------- .../rke1-cluster-configuration.md | 365 ------------------ 4 files changed, 1444 deletions(-) delete mode 100644 docs/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md delete mode 100644 i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md delete mode 100644 i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md delete mode 100644 versioned_docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md diff --git a/docs/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md b/docs/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md deleted file mode 100644 index 7954c8a3b11..00000000000 --- a/docs/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md +++ /dev/null @@ -1,365 +0,0 @@ ---- -title: RKE Cluster Configuration Reference ---- - - - - - - - -When Rancher installs Kubernetes, it uses [RKE](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) or 
[RKE2](https://docs.rke2.io/) as the Kubernetes distribution. - -This section covers the configuration options that are available in Rancher for a new or existing RKE Kubernetes cluster. - - -## Overview - -You can configure the Kubernetes options one of two ways: - -- [Rancher UI](#configuration-options-in-the-rancher-ui): Use the Rancher UI to select options that are commonly customized when setting up a Kubernetes cluster. -- [Cluster Config File](#rke-cluster-config-file-reference): Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the options available in an RKE installation, except for system_images configuration, by specifying them in YAML. - -The RKE cluster config options are nested under the `rancher_kubernetes_engine_config` directive. For more information, see the section about the [cluster config file.](#rke-cluster-config-file-reference) - -In [clusters launched by RKE](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md), you can edit any of the remaining options that follow. - -For an example of RKE config file syntax, see the [RKE documentation](https://rancher.com/docs/rke/latest/en/example-yamls/). - -The forms in the Rancher UI don't include all advanced options for configuring RKE. For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/) - -## Editing Clusters with a Form in the Rancher UI - -To edit your cluster, - -1. In the upper left corner, click **☰ > Cluster Management**. -1. Go to the cluster you want to configure and click **⋮ > Edit Config**. - - -## Editing Clusters with YAML - -Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. 
Using a config file allows you to set any of the options available in an RKE installation, except for system_images configuration, by specifying them in YAML. - -RKE clusters (also called RKE1 clusters) are edited differently than RKE2 and K3s clusters. - -To edit an RKE config file directly from the Rancher UI, - -1. Click **☰ > Cluster Management**. -1. Go to the RKE cluster you want to configure and click **⋮ > Edit Config**. This takes you to the RKE configuration form. Note: Because cluster provisioning changed in Rancher 2.6, the **⋮ > Edit as YAML** can be used for configuring RKE2 clusters, but it can't be used for editing RKE1 configuration. -1. In the configuration form, scroll down and click **Edit as YAML**. -1. Edit the RKE options under the `rancher_kubernetes_engine_config` directive. - -## Configuration Options in the Rancher UI - -:::tip - -Some advanced configuration options are not exposed in the Rancher UI forms, but they can be enabled by editing the RKE cluster configuration file in YAML. For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/) - -::: - -### Kubernetes Version - -The version of Kubernetes installed on your cluster nodes. Rancher packages its own version of Kubernetes based on [hyperkube](https://github.com/rancher/hyperkube). - -For more detail, see [Upgrading Kubernetes](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md). - -### Network Provider - -The [Network Provider](https://kubernetes.io/docs/concepts/cluster-administration/networking/) that the cluster uses. For more details on the different networking providers, please view our [Networking FAQ](../../../faq/container-network-interface-providers.md). - -:::caution - -After you launch the cluster, you cannot change your network provider.
Therefore, choose which network provider you want to use carefully, as Kubernetes doesn't allow switching between network providers. Once a cluster is created with a network provider, changing network providers would require you tear down the entire cluster and all its applications. - -::: - -Out of the box, Rancher is compatible with the following network providers: - -- [Canal](https://github.com/projectcalico/canal) -- [Flannel](https://github.com/coreos/flannel#flannel) -- [Calico](https://docs.projectcalico.org/v3.11/introduction/) -- [Weave](https://github.com/weaveworks/weave) - - - -:::note Notes on Weave: - -When Weave is selected as network provider, Rancher will automatically enable encryption by generating a random password. If you want to specify the password manually, please see how to configure your cluster using a [Config File](#rke-cluster-config-file-reference) and the [Weave Network Plug-in Options](https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/#weave-network-plug-in-options). - -::: - -### Project Network Isolation - -If your network provider allows project network isolation, you can choose whether to enable or disable inter-project communication. - -Project network isolation is available if you are using any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin. - -### Kubernetes Cloud Providers - -You can configure a [Kubernetes cloud provider](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md). If you want to use dynamically provisioned [volumes and storage](../../../how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md) in Kubernetes, typically you must select the specific cloud provider in order to use it. For example, if you want to use Amazon EBS, you would need to select the `aws` cloud provider. 
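Put together, choices like the network provider and cloud provider nest under the `rancher_kubernetes_engine_config` directive when the cluster is edited as YAML. A minimal illustrative sketch — the values are examples only, not a recommended configuration:

```yaml
rancher_kubernetes_engine_config:
  network:
    plugin: canal        # example: one of the supported network providers
  cloud_provider:
    name: aws            # example: enables Amazon EBS volume provisioning
```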
- -:::note - -If the cloud provider you want to use is not listed as an option, you will need to use the [config file option](#rke-cluster-config-file-reference) to configure the cloud provider. Please reference the [RKE cloud provider documentation](https://rancher.com/docs/rke/latest/en/config-options/cloud-providers/) on how to configure the cloud provider. - -::: - -### Private Registries - -The cluster-level private registry configuration is only used for provisioning clusters. - -There are two main ways to set up private registries in Rancher: by setting up the [global default registry](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry.md) through the **Settings** tab in the global view, and by setting up a private registry in the advanced options in the cluster-level settings. The global default registry is intended to be used for air-gapped setups, for registries that do not require credentials. The cluster-level private registry is intended to be used in all setups in which the private registry requires credentials. - -If your private registry requires credentials, you need to pass the credentials to Rancher by editing the cluster options for each cluster that needs to pull images from the registry. - -The private registry configuration option tells Rancher where to pull the [system images](https://rancher.com/docs/rke/latest/en/config-options/system-images/) or [addon images](https://rancher.com/docs/rke/latest/en/config-options/add-ons/) that will be used in your cluster. - -- **System images** are components needed to maintain the Kubernetes cluster. -- **Add-ons** are used to deploy several cluster components, including network plug-ins, the ingress controller, the DNS provider, or the metrics server. 
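A cluster-level private registry with credentials can be sketched in the same config file. The registry URL and credentials below are placeholders; see the linked RKE private-registries documentation for the full option set:

```yaml
rancher_kubernetes_engine_config:
  private_registries:
    - url: registry.example.com   # placeholder registry
      user: registry-user         # placeholder credentials
      password: registry-password
      is_default: true            # pull system images from this registry
```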
- -For more information on setting up a private registry for components applied during the provisioning of the cluster, see the [RKE documentation on private registries](https://rancher.com/docs/rke/latest/en/config-options/private-registries/). - -Rancher v2.6 introduced the ability to configure [ECR registries for RKE clusters](https://rancher.com/docs/rke/latest/en/config-options/private-registries/#amazon-elastic-container-registry-ecr-private-registry-setup). - -### Authorized Cluster Endpoint - -Authorized Cluster Endpoint (ACE) can be used to directly access the Kubernetes API server, without requiring communication through Rancher. - -:::note - -ACE is available on RKE, RKE2, and K3s clusters that are provisioned or registered with Rancher. It's not available on clusters in a hosted Kubernetes provider, such as Amazon's EKS. - -::: - -ACE must be set up [manually](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md#authorized-cluster-endpoint-support-for-rke2-and-k3s-clusters) on RKE2 and K3s clusters. In RKE, ACE is enabled by default in Rancher-launched Kubernetes clusters, using the IP of the node with the `controlplane` role and the default Kubernetes self-signed certificates. - -For more detail on how an authorized cluster endpoint works and why it is used, refer to the [architecture section.](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) - -We recommend using a load balancer with the authorized cluster endpoint. 
For details, refer to the [recommended architecture section.](../../rancher-manager-architecture/architecture-recommendations.md#architecture-for-an-authorized-cluster-endpoint-ace) - -### Node Pools - -For information on using the Rancher UI to set up node pools in an RKE cluster, refer to [this page.](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md) - -### NGINX Ingress - -If you want to publish your applications in a high-availability configuration, and you're hosting your nodes with a cloud provider that doesn't have a native load-balancing feature, enable this option to use NGINX Ingress within the cluster. - -### Metrics Server Monitoring - -Option to enable or disable [Metrics Server](https://rancher.com/docs/rke/latest/en/config-options/add-ons/metrics-server/). - -Each cloud provider capable of launching a cluster using RKE can collect metrics and monitor your cluster nodes. Enable this option to view your node metrics from your cloud provider's portal. - -### Pod Security Policy Support - -Option to enable or disable pod security policies for the cluster. You must have an existing Pod Security Policy configured before you can use this option. - -### Docker Version on Nodes - -Configures whether nodes are allowed to run versions of Docker that Rancher doesn't officially support. - -If you choose to require a supported Docker version, Rancher will stop pods from running on nodes that don't have a supported Docker version installed. - -For details on which Docker versions were tested with each Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/) - -### Docker Root Directory - -If the nodes you are adding to the cluster have Docker configured with a non-default Docker Root Directory (default is `/var/lib/docker`), specify the correct Docker Root Directory in this option.
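In the cluster config file, a non-default Docker Root Directory is expressed as a top-level key, outside `rancher_kubernetes_engine_config`; the path below is a placeholder:

```yaml
# Rancher-specific option; sits at the top level of the cluster config.
docker_root_dir: /opt/docker
```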
- -### Default Pod Security Policy - -If you enable **Pod Security Policy Support**, use this drop-down to choose the pod security policy that's applied to the cluster. - -### Node Port Range - -Option to change the range of ports that can be used for [NodePort services](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport). Default is `30000-32767`. - -### Recurring etcd Snapshots - -Option to enable or disable [recurring etcd snapshots](https://rancher.com/docs/rke/latest/en/etcd-snapshots/#etcd-recurring-snapshots). - -### Agent Environment Variables - -Option to set environment variables for [rancher agents](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md). The environment variables can be set using key-value pairs. If the Rancher agent requires a proxy to communicate with the Rancher server, you can set the `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` environment variables here. - -### Updating ingress-nginx - -Clusters that were created before Kubernetes 1.16 will have an `ingress-nginx` `updateStrategy` of `OnDelete`. Clusters that were created with Kubernetes 1.16 or newer will have `RollingUpdate`. - -If the `updateStrategy` of `ingress-nginx` is `OnDelete`, you will need to delete the `ingress-nginx` pods to get the correct version for your deployment. - -### Cluster Agent Configuration and Fleet Agent Configuration - -You can configure the scheduling fields and resource limits for the Cluster Agent and the cluster's Fleet Agent. You can use these fields to customize tolerations, affinity rules, and resource requirements. Additional tolerations are appended to a list of default tolerations and control plane node taints. If you define custom affinity rules, they override the global default affinity setting. Defining resource requirements sets requests or limits where there previously were none.
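As a rough sketch of what such a customization can look like in the cluster config — assuming a Rancher version (v2.7.5 or later) that supports agent customization, and with placeholder toleration keys and resource values:

```yaml
cluster_agent_deployment_customization:
  append_tolerations: # appended to the default tolerations
    - effect: NoSchedule
      key: example-taint-key # placeholder
      value: example-taint-value
  override_resource_requirements:
    limits:
      cpu: 500m
      memory: 512Mi
    requests:
      cpu: 250m
      memory: 256Mi
```

The cluster's Fleet Agent accepts an analogous block (`fleet_agent_deployment_customization`).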
- -:::note - -With this option, it's possible to override or remove rules that are required for the functioning of the cluster. We strongly recommend against removing or overriding these and any other affinity rules, as this may cause unwanted side effects: - -- `affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution` for `cattle-cluster-agent` -- `cluster-agent-default-affinity` for `cattle-cluster-agent` -- `fleet-agent-default-affinity` for `fleet-agent` - -::: - -If you downgrade Rancher to v2.7.4 or below, your changes will be lost and the agents will re-deploy without your customizations. The Fleet agent will fall back to using its built-in default values when it re-deploys. If the Fleet version doesn't change during the downgrade, the re-deploy won't be immediate. - - -## RKE Cluster Config File Reference - -Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the [options available](https://rancher.com/docs/rke/latest/en/config-options/) in an RKE installation, except for the `system_images` configuration. The `system_images` option is not supported when creating a cluster with the Rancher UI or API. - -For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/) - -### Config File Structure in Rancher - -RKE (Rancher Kubernetes Engine) is the tool that Rancher uses to provision Kubernetes clusters. Rancher's cluster config files used to have the same structure as [RKE config files,](https://rancher.com/docs/rke/latest/en/example-yamls/) but the structure changed so that in Rancher, RKE cluster config items are separated from non-RKE config items. Therefore, configuration for your cluster needs to be nested under the `rancher_kubernetes_engine_config` directive in the cluster config file.
Cluster config files created with earlier versions of Rancher will need to be updated for this format. An example cluster config file is included below. - -
- Example Cluster Config File - -```yaml -# -# Cluster Config -# -docker_root_dir: /var/lib/docker -enable_cluster_alerting: false -enable_cluster_monitoring: false -enable_network_policy: false -local_cluster_auth_endpoint: - enabled: true -# -# Rancher Config -# -rancher_kubernetes_engine_config: # Your RKE template config goes here. - addon_job_timeout: 30 - authentication: - strategy: x509 - ignore_docker_version: true -# -# # Currently only nginx ingress provider is supported. -# # To disable ingress controller, set `provider: none` -# # To enable ingress on specific nodes, use the node_selector, eg: -# provider: nginx -# node_selector: -# app: ingress -# - ingress: - provider: nginx - kubernetes_version: v1.15.3-rancher3-1 - monitoring: - provider: metrics-server -# -# If you are using calico on AWS -# -# network: -# plugin: calico -# calico_network_provider: -# cloud_provider: aws -# -# # To specify flannel interface -# -# network: -# plugin: flannel -# flannel_network_provider: -# iface: eth1 -# -# # To specify flannel interface for canal plugin -# -# network: -# plugin: canal -# canal_network_provider: -# iface: eth1 -# - network: - options: - flannel_backend_type: vxlan - plugin: canal -# -# services: -# kube-api: -# service_cluster_ip_range: 10.43.0.0/16 -# kube-controller: -# cluster_cidr: 10.42.0.0/16 -# service_cluster_ip_range: 10.43.0.0/16 -# kubelet: -# cluster_domain: cluster.local -# cluster_dns_server: 10.43.0.10 -# - services: - etcd: - backup_config: - enabled: true - interval_hours: 12 - retention: 6 - safe_timestamp: false - creation: 12h - extra_args: - election-timeout: 5000 - heartbeat-interval: 500 - gid: 0 - retention: 72h - snapshot: false - uid: 0 - kube_api: - always_pull_images: false - pod_security_policy: false - service_node_port_range: 30000-32767 - ssh_agent_auth: false -windows_prefered_cluster: false -``` -
- -### Default DNS provider - -The table below indicates which DNS provider is deployed by default. See the [RKE documentation on DNS providers](https://rancher.com/docs/rke/latest/en/config-options/add-ons/dns/) for more information on how to configure a different DNS provider. CoreDNS can only be used on Kubernetes v1.12.0 and higher. - -| Rancher version | Kubernetes version | Default DNS provider | -|-------------|--------------------|----------------------| -| v2.2.5 and higher | v1.14.0 and higher | CoreDNS | -| v2.2.5 and higher | v1.13.x and lower | kube-dns | -| v2.2.4 and lower | any | kube-dns | - -## Rancher Specific Parameters in YAML - -Besides the RKE config file options, there are also Rancher-specific settings that can be configured in the Config File (YAML): - -### docker_root_dir - -See [Docker Root Directory](#docker-root-directory). - -### enable_cluster_monitoring - -Option to enable or disable [Cluster Monitoring](../../../integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md). - -### enable_network_policy - -Option to enable or disable Project Network Isolation. - -Project network isolation is available if you are using any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin. - -### local_cluster_auth_endpoint - -See [Authorized Cluster Endpoint](#authorized-cluster-endpoint). - -Example: - -```yaml -local_cluster_auth_endpoint: - enabled: true - fqdn: "FQDN" - ca_certs: |- - -----BEGIN CERTIFICATE----- - ... - -----END CERTIFICATE----- -``` - -### Custom Network Plug-in - -You can add a custom network plug-in by using the [user-defined add-on functionality](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/) of RKE. You define any add-on that you want deployed after the Kubernetes cluster is deployed.
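For instance, a user-defined add-on can be embedded directly in the cluster config as an `addons` manifest string; the namespace object below is only a placeholder for a real network plug-in manifest:

```yaml
rancher_kubernetes_engine_config:
  addons: |-
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: example-addon # placeholder manifest
```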
- -There are two ways that you can specify an add-on: - -- [In-line Add-ons](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/#in-line-add-ons) -- [Referencing YAML Files for Add-ons](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/#referencing-yaml-files-for-add-ons) - -For an example of how to configure a custom network plug-in by editing the `cluster.yml`, refer to the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/custom-network-plugin-example) \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md deleted file mode 100644 index 043a6b43cc9..00000000000 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md +++ /dev/null @@ -1,357 +0,0 @@ ---- -title: RKE 集群配置参考 ---- - - - -Rancher 安装 Kubernetes 时,它使用 [RKE](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) 或 [RKE2](https://docs.rke2.io/) 作为 Kubernetes 发行版。 - -本文介绍 Rancher 中可用于新的或现有的 RKE Kubernetes 集群的配置选项。 - - -## 概述 - -你可以通过以下两种方式之一来配置 Kubernetes 选项: - -- [Rancher UI](#rancher-ui-中的配置选项):使用 Rancher UI 来选择设置 Kubernetes 集群时常用的自定义选项。 -- [集群配置文件](#rke-集群配置文件参考):高级用户可以创建一个 RKE 配置文件,而不是使用 Rancher UI 来为集群选择 Kubernetes 选项。配置文件可以让你使用 YAML 来指定 RKE 安装中可用的任何选项(除了 system_images 配置)。 - -RKE 集群配置选项嵌套在 `rancher_kubernetes_engine_config` 参数下。有关详细信息,请参阅[集群配置文件](#rke-集群配置文件参考)。 - -在 [RKE 启动的集群](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md)中,你可以编辑任何后续剩余的选项。 - -有关 RKE 配置文件语法的示例,请参阅 [RKE 
文档](https://rancher.com/docs/rke/latest/en/example-yamls/)。 - -Rancher UI 中的表单不包括配置 RKE 的所有高级选项。有关 YAML 中 RKE Kubernetes 集群的可配置选项的完整参考,请参阅 [RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/)。 - -## 在 Rancher UI 中使用表单编辑集群 - -要编辑你的集群: - -1. 在左上角,单击 **☰ > 集群管理**。 -1. 转到要配置的集群,然后单击 **⋮ > 编辑配置**。 - - -## 使用 YAML 编辑集群 - -高级用户可以创建一个 RKE 配置文件,而不是使用 Rancher UI 来为集群选择 Kubernetes 选项。配置文件可以让你使用 YAML 来指定 RKE 安装中可用的任何选项(除了 system_images 配置)。 - -RKE 集群(也称为 RKE1 集群)的编辑方式与 RKE2 和 K3s 集群不同。 - -要直接从 Rancher UI 编辑 RKE 配置文件: - -1. 点击 **☰ > 集群管理**。 -1. 转到要配置的 RKE 集群。单击并单击 **⋮ > 编辑配置**。你将会转到 RKE 配置表单。请注意,由于集群配置在 Rancher 2.6 中发生了变更,**⋮ > 以 YAML 文件编辑**可用于配置 RKE2 集群,但不能用于编辑 RKE1 配置。 -1. 在配置表单中,向下滚动并单击**以 YAML 文件编辑**。 -1. 编辑 `rancher_kubernetes_engine_config` 参数下的 RKE 选项。 - -## Rancher UI 中的配置选项 - -:::tip - -一些高级配置选项没有在 Rancher UI 表单中开放,但你可以通过在 YAML 中编辑 RKE 集群配置文件来启用这些选项。有关 YAML 中 RKE Kubernetes 集群的可配置选项的完整参考,请参阅 [RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/)。 - -::: - -### Kubernetes 版本 - -这指的是集群节点上安装的 Kubernetes 版本。Rancher 基于 [hyperkube](https://github.com/rancher/hyperkube) 打包了自己的 Kubernetes 版本。 - -有关更多详细信息,请参阅[升级 Kubernetes](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md)。 - -### 网络提供商 - -这指的是集群使用的[网络提供商](https://kubernetes.io/docs/concepts/cluster-administration/networking/)。有关不同网络提供商的更多详细信息,请查看我们的[网络常见问题解答](../../../faq/container-network-interface-providers.md)。 - -:::caution - -启动集群后,你无法更改网络提供商。由于 Kubernetes 不允许在网络提供商之间切换,因此,请谨慎选择要使用的网络提供商。使用网络提供商创建集群后,如果你需要更改网络提供商,你将需要拆除整个集群以及其中的所有应用。 - -::: - -Rancher 与以下开箱即用的网络提供商兼容: - -- [Canal](https://github.com/projectcalico/canal) -- [Flannel](https://github.com/coreos/flannel#flannel) -- [Calico](https://docs.projectcalico.org/v3.11/introduction/) -- [Weave](https://github.com/weaveworks/weave) - -:::note Weave 注意事项: - -选择 Weave 作为网络提供商时,Rancher 将通过生成随机密码来自动启用加密。如果你想手动指定密码,请参阅使用[配置文件](#rke-集群配置文件参考)和 [Weave 
网络插件选项](https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/#weave-network-plug-in-options)来配置集群。 - -::: - -### 项目网络隔离 - -如果你的网络提供商允许项目网络隔离,你可以选择启用或禁用项目间的通信。 - -如果你使用支持执行 Kubernetes 网络策略的 RKE 网络插件(例如 Canal 或 Cisco ACI 插件),则可以使用项目网络隔离。 - -### Kubernetes 云提供商 - -你可以配置 [Kubernetes 云提供商](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md)。如果你想在 Kubernetes 中使用动态配置的[卷和存储](../../../how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md),你通常需要选择特定的云提供商。例如,如果你想使用 Amazon EBS,则需要选择 `aws` 云提供商。 - -:::note - -如果你要使用的云提供商未作为选项列出,你需要使用[配置文件选项](#rke-集群配置文件参考)来配置云提供商。请参考 [RKE 云提供商文档](https://rancher.com/docs/rke/latest/en/config-options/cloud-providers/)来了解如何配置云提供商。 - -::: - -### 私有镜像仓库 - -集群级别的私有镜像仓库配置仅能用于配置集群。 - -在 Rancher 中设置私有镜像仓库的主要方法有两种:通过[全局默认镜像仓库](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry.md)中的**设置**选项卡设置全局默认镜像仓库,以及在集群级别设置的高级选项中设置私有镜像仓库。全局默认镜像仓库可以用于离线设置,不需要凭证的镜像仓库。而集群级私有镜像仓库用于所有需要凭证的私有镜像仓库。 - -如果你的私有镜像仓库需要凭证,为了将凭证传递给 Rancher,你需要编辑每个需要从仓库中拉取镜像的集群的集群选项。 - -私有镜像仓库的配置选项能让 Rancher 知道要从哪里拉取用于集群的[系统镜像](https://rancher.com/docs/rke/latest/en/config-options/system-images/)或[附加组件镜像](https://rancher.com/docs/rke/latest/en/config-options/add-ons/)。 - -- **系统镜像**是维护 Kubernetes 集群所需的组件。 -- **附加组件**用于部署多个集群组件,包括网络插件、ingress controller、DNS 提供商或 metrics server。 - -有关为集群配置期间应用的组件设置私有镜像仓库的更多信息,请参阅[私有镜像仓库的 RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/private-registries/)。 - -Rancher v2.6 引入了[为 RKE 集群配置 ECR 镜像仓库](https://rancher.com/docs/rke/latest/en/config-options/private-registries/#amazon-elastic-container-registry-ecr-private-registry-setup)的功能。 - -### 授权集群端点 - -授权集群端点(ACE)可用于直接访问 Kubernetes API server,而无需通过 Rancher 进行通信。 - -:::note - -授权集群端点仅适用于 Rancher 启动的 Kubernetes 集群,即只适用于 Rancher [使用 
RKE](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#配置-kubernetes-集群的工具) 来配置的集群。它不适用于托管在 Kubernetes 提供商中的集群,例如 Amazon 的 EKS。 - -::: - -在 Rancher 启动的 Kubernetes 集群中,它默认启用,使用具有 `controlplane` 角色的节点的 IP 和默认的 Kubernetes 自签名证书。 - -有关授权集群端点的工作原理以及使用的原因,请参阅[架构介绍](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点)。 - -我们建议使用具有授权集群端点的负载均衡器。有关详细信息,请参阅[推荐的架构](../../rancher-manager-architecture/architecture-recommendations.md#授权集群端点架构)。 - -### 节点池 - -有关使用 Rancher UI 在 RKE 集群中设置节点池的信息,请参阅[此页面](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md)。 - -### NGINX Ingress - -如果你想使用高可用性配置来发布应用,并且你使用没有原生负载均衡功能的云提供商来托管主机,请启用此选项以在集群中使用 NGINX Ingress。 - -### Metrics Server 监控 - -这是启用或禁用 [Metrics Server](https://rancher.com/docs/rke/latest/en/config-options/add-ons/metrics-server/) 的选项。 - -每个能够使用 RKE 启动集群的云提供商都可以收集指标并监控你的集群节点。如果启用此选项,你可以从你的云提供商门户查看你的节点指标。 - -### 节点上的 Docker 版本 - -表示是否允许节点运行 Rancher 不正式支持的 Docker 版本。 - -如果你选择使用支持的 Docker 版本,Rancher 会禁止 pod 运行在安装了不支持的 Docker 版本的节点上。 - -如需了解各个 Rancher 版本通过了哪些 Docker 版本测试,请参见[支持和维护条款](https://rancher.com/support-maintenance-terms/)。 - -### Docker 根目录 - -如果要添加到集群的节点为 Docker 配置了非默认 Docker 根目录(默认为 `/var/lib/docker`),请在此选项中指定正确的 Docker 根目录。 - -### 默认 Pod 安全策略 - -如果你启用了 **Pod 安全策略支持**,请使用此下拉菜单选择应用于集群的 pod 安全策略。 - -### 节点端口范围 - -更改可用于 [NodePort 服务](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport)的端口范围的选项。默认为 `30000-32767`。 - -### 定期 etcd 快照 - -启用或禁用[定期 etcd 快照](https://rancher.com/docs/rke/latest/en/etcd-snapshots/#etcd-recurring-snapshots)的选项。 - -### Agent 环境变量 - -为 [rancher agent](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md) 设置环境变量的选项。你可以使用键值对设置环境变量。如果 Rancher Agent 需要使用代理与 Rancher Server 通信,则可以使用 Agent 环境变量设置 `HTTP_PROXY`,`HTTPS_PROXY` 和 `NO_PROXY` 环境变量。 - -### 更新 
ingress-nginx - -使用 Kubernetes 1.16 之前版本创建的集群将具有 `OnDelete`的 `ingress-nginx` `updateStrategy`。使用 Kubernetes 1.16 或更高版本创建的集群将具有 `RollingUpdate`。 - -如果 `ingress-nginx` 的 `updateStrategy` 是 `OnDelete`,则需要删除这些 pod 以获得 deployment 正确的版本。 - -### Cluster Agent 配置和 Fleet Agent 配置 - -你可以为 Cluster Agent 和集群的 Fleet Agent 配置调度字段和资源限制。你可以使用这些字段来自定义容忍度、亲和性规则和资源要求。其他容忍度会被尾附到默认容忍度和 Control Plane 节点污点的列表中。如果你定义了自定义亲和性规则,它们将覆盖全局默认亲和性设置。定义资源要求会在以前没有的地方设置请求或限制。 - -:::note - -有了这个选项,你可以覆盖或删除运行集群所需的规则。我们强烈建议你不要删除或覆盖这些规则和其他亲和性规则,因为这可能会导致不必要的影响: - -- `affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution` 用于 `cattle-cluster-agent` -- `cluster-agent-default-affinity` 用于 `cattle-cluster-agent` -- `fleet-agent-default-affinity` 用于 `fleet-agent` - -::: - -如果将 Rancher 降级到 v2.7.4 或更低版本,你的更改将丢失,而且 Agent 将在没有你的自定义设置的情况下重新部署。重新部署时,Fleet Agent 将回退到使用内置默认值。如果降级期间 Fleet 版本没有更改,则不会立即重新部署。 - - -## RKE 集群配置文件参考 - -高级用户可以创建一个 RKE 配置文件,而不是使用 Rancher UI 来为集群选择 Kubernetes 选项。配置文件可以让你在 RKE 安装中设置任何[可用选项](https://rancher.com/docs/rke/latest/en/config-options/)(`system_images` 配置除外)。使用 Rancher UI 或 API 创建集群时,不支持 `system_images` 选项。 - -有关 YAML 中 RKE Kubernetes 集群的可配置选项的完整参考,请参阅 [RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/)。 - -### Rancher 中的配置文件结构 - -RKE(Rancher Kubernetes Engine)是 Rancher 用来配置 Kubernetes 集群的工具。过去,Rancher 的集群配置文件与 [RKE 配置文件](https://rancher.com/docs/rke/latest/en/example-yamls/)的结构是一致的。但由于 Rancher 文件结构发生了变化,因此在 Rancher 中,RKE 集群配置项与非 RKE 配置项是分开的。所以,你的集群配置需要嵌套在集群配置文件中的 `rancher_kubernetes_engine_config` 参数下。使用早期版本的 Rancher 创建的集群配置文件需要针对这种格式进行更新。以下是一个集群配置文件示例: - -
- 集群配置文件示例 - -```yaml -# -# Cluster Config -# -docker_root_dir: /var/lib/docker -enable_cluster_alerting: false -enable_cluster_monitoring: false -enable_network_policy: false -local_cluster_auth_endpoint: - enabled: true -# -# Rancher Config -# -rancher_kubernetes_engine_config: # Your RKE template config goes here. - addon_job_timeout: 30 - authentication: - strategy: x509 - ignore_docker_version: true -# -# # 目前仅支持 Nginx ingress provider -# # 要禁用 Ingress controller,设置 `provider: none` -# # 要在指定节点上禁用 Ingress,使用 node_selector,例如: -# provider: nginx -# node_selector: -# app: ingress -# - ingress: - provider: nginx - kubernetes_version: v1.15.3-rancher3-1 - monitoring: - provider: metrics-server -# -# If you are using calico on AWS -# -# network: -# plugin: calico -# calico_network_provider: -# cloud_provider: aws -# -# # To specify flannel interface -# -# network: -# plugin: flannel -# flannel_network_provider: -# iface: eth1 -# -# # To specify flannel interface for canal plugin -# -# network: -# plugin: canal -# canal_network_provider: -# iface: eth1 -# - network: - options: - flannel_backend_type: vxlan - plugin: canal -# -# services: -# kube-api: -# service_cluster_ip_range: 10.43.0.0/16 -# kube-controller: -# cluster_cidr: 10.42.0.0/16 -# service_cluster_ip_range: 10.43.0.0/16 -# kubelet: -# cluster_domain: cluster.local -# cluster_dns_server: 10.43.0.10 -# - services: - etcd: - backup_config: - enabled: true - interval_hours: 12 - retention: 6 - safe_timestamp: false - creation: 12h - extra_args: - election-timeout: 5000 - heartbeat-interval: 500 - gid: 0 - retention: 72h - snapshot: false - uid: 0 - kube_api: - always_pull_images: false - pod_security_policy: false - service_node_port_range: 30000-32767 - ssh_agent_auth: false -windows_prefered_cluster: false -``` -
- -### 默认 DNS 提供商 - -下表显示了默认部署的 DNS 提供商。有关如何配置不同 DNS 提供商的更多信息,请参阅 [DNS 提供商相关的 RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/add-ons/dns/)。CoreDNS 只能在 Kubernetes v1.12.0 及更高版本上使用。 - -| Rancher 版本 | Kubernetes 版本 | 默认 DNS 提供商 | -|-------------|--------------------|----------------------| -| v2.2.5 及更高版本 | v1.14.0 及更高版本 | CoreDNS | -| v2.2.5 及更高版本 | v1.13.x 及更低版本 | kube-dns | -| v2.2.4 及更低版本 | 任意 | kube-dns | - -## YAML 中的 Rancher 特定参数 - -除了 RKE 配置文件选项外,还有可以在配置文件 (YAML) 中配置的 Rancher 特定设置如下。 - -### docker_root_dir - -请参阅 [Docker 根目录](#docker-根目录)。 - -### enable_cluster_monitoring - -启用或禁用[集群监控](../../../integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md)的选项。 - -### enable_network_policy - -启用或禁用项目网络隔离的选项。 - -如果你使用支持执行 Kubernetes 网络策略的 RKE 网络插件(例如 Canal 或 Cisco ACI 插件),则可以使用项目网络隔离。 - -### local_cluster_auth_endpoint - -请参阅[授权集群端点](#授权集群端点)。 - -示例: - -```yaml -local_cluster_auth_endpoint: - enabled: true - fqdn: "FQDN" - ca_certs: |- - -----BEGIN CERTIFICATE----- - ... 
- -----END CERTIFICATE----- -``` - -### 自定义网络插件 - -你可以使用 RKE 的[用户定义的附加组件功能](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/)来添加自定义网络插件。部署 Kubernetes 集群之后,你可以定义要部署的任何附加组件。 - -有两种方法可以指定附加组件: - -- [内嵌附加组件](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/#in-line-add-ons) -- [为附加组件引用 YAML 文件](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/#referencing-yaml-files-for-add-ons) - -有关如何通过编辑 `cluster.yml` 来配置自定义网络插件的示例,请参阅 [RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/custom-network-plugin-example)。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md deleted file mode 100644 index 043a6b43cc9..00000000000 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md +++ /dev/null @@ -1,357 +0,0 @@ ---- -title: RKE 集群配置参考 ---- - - - -Rancher 安装 Kubernetes 时,它使用 [RKE](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) 或 [RKE2](https://docs.rke2.io/) 作为 Kubernetes 发行版。 - -本文介绍 Rancher 中可用于新的或现有的 RKE Kubernetes 集群的配置选项。 - - -## 概述 - -你可以通过以下两种方式之一来配置 Kubernetes 选项: - -- [Rancher UI](#rancher-ui-中的配置选项):使用 Rancher UI 来选择设置 Kubernetes 集群时常用的自定义选项。 -- [集群配置文件](#rke-集群配置文件参考):高级用户可以创建一个 RKE 配置文件,而不是使用 Rancher UI 来为集群选择 Kubernetes 选项。配置文件可以让你使用 YAML 来指定 RKE 安装中可用的任何选项(除了 system_images 配置)。 - -RKE 集群配置选项嵌套在 `rancher_kubernetes_engine_config` 参数下。有关详细信息,请参阅[集群配置文件](#rke-集群配置文件参考)。 - -在 [RKE 
启动的集群](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md)中,你可以编辑任何后续剩余的选项。 - -有关 RKE 配置文件语法的示例,请参阅 [RKE 文档](https://rancher.com/docs/rke/latest/en/example-yamls/)。 - -Rancher UI 中的表单不包括配置 RKE 的所有高级选项。有关 YAML 中 RKE Kubernetes 集群的可配置选项的完整参考,请参阅 [RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/)。 - -## 在 Rancher UI 中使用表单编辑集群 - -要编辑你的集群: - -1. 在左上角,单击 **☰ > 集群管理**。 -1. 转到要配置的集群,然后单击 **⋮ > 编辑配置**。 - - -## 使用 YAML 编辑集群 - -高级用户可以创建一个 RKE 配置文件,而不是使用 Rancher UI 来为集群选择 Kubernetes 选项。配置文件可以让你使用 YAML 来指定 RKE 安装中可用的任何选项(除了 system_images 配置)。 - -RKE 集群(也称为 RKE1 集群)的编辑方式与 RKE2 和 K3s 集群不同。 - -要直接从 Rancher UI 编辑 RKE 配置文件: - -1. 点击 **☰ > 集群管理**。 -1. 转到要配置的 RKE 集群。单击并单击 **⋮ > 编辑配置**。你将会转到 RKE 配置表单。请注意,由于集群配置在 Rancher 2.6 中发生了变更,**⋮ > 以 YAML 文件编辑**可用于配置 RKE2 集群,但不能用于编辑 RKE1 配置。 -1. 在配置表单中,向下滚动并单击**以 YAML 文件编辑**。 -1. 编辑 `rancher_kubernetes_engine_config` 参数下的 RKE 选项。 - -## Rancher UI 中的配置选项 - -:::tip - -一些高级配置选项没有在 Rancher UI 表单中开放,但你可以通过在 YAML 中编辑 RKE 集群配置文件来启用这些选项。有关 YAML 中 RKE Kubernetes 集群的可配置选项的完整参考,请参阅 [RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/)。 - -::: - -### Kubernetes 版本 - -这指的是集群节点上安装的 Kubernetes 版本。Rancher 基于 [hyperkube](https://github.com/rancher/hyperkube) 打包了自己的 Kubernetes 版本。 - -有关更多详细信息,请参阅[升级 Kubernetes](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md)。 - -### 网络提供商 - -这指的是集群使用的[网络提供商](https://kubernetes.io/docs/concepts/cluster-administration/networking/)。有关不同网络提供商的更多详细信息,请查看我们的[网络常见问题解答](../../../faq/container-network-interface-providers.md)。 - -:::caution - -启动集群后,你无法更改网络提供商。由于 Kubernetes 不允许在网络提供商之间切换,因此,请谨慎选择要使用的网络提供商。使用网络提供商创建集群后,如果你需要更改网络提供商,你将需要拆除整个集群以及其中的所有应用。 - -::: - -Rancher 与以下开箱即用的网络提供商兼容: - -- [Canal](https://github.com/projectcalico/canal) -- [Flannel](https://github.com/coreos/flannel#flannel) -- [Calico](https://docs.projectcalico.org/v3.11/introduction/) -- [Weave](https://github.com/weaveworks/weave) - -:::note Weave 注意事项: - 
-选择 Weave 作为网络提供商时,Rancher 将通过生成随机密码来自动启用加密。如果你想手动指定密码,请参阅使用[配置文件](#rke-集群配置文件参考)和 [Weave 网络插件选项](https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/#weave-network-plug-in-options)来配置集群。 - -::: - -### 项目网络隔离 - -如果你的网络提供商允许项目网络隔离,你可以选择启用或禁用项目间的通信。 - -如果你使用支持执行 Kubernetes 网络策略的 RKE 网络插件(例如 Canal 或 Cisco ACI 插件),则可以使用项目网络隔离。 - -### Kubernetes 云提供商 - -你可以配置 [Kubernetes 云提供商](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md)。如果你想在 Kubernetes 中使用动态配置的[卷和存储](../../../how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md),你通常需要选择特定的云提供商。例如,如果你想使用 Amazon EBS,则需要选择 `aws` 云提供商。 - -:::note - -如果你要使用的云提供商未作为选项列出,你需要使用[配置文件选项](#rke-集群配置文件参考)来配置云提供商。请参考 [RKE 云提供商文档](https://rancher.com/docs/rke/latest/en/config-options/cloud-providers/)来了解如何配置云提供商。 - -::: - -### 私有镜像仓库 - -集群级别的私有镜像仓库配置仅能用于配置集群。 - -在 Rancher 中设置私有镜像仓库的主要方法有两种:通过[全局默认镜像仓库](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry.md)中的**设置**选项卡设置全局默认镜像仓库,以及在集群级别设置的高级选项中设置私有镜像仓库。全局默认镜像仓库可以用于离线设置,不需要凭证的镜像仓库。而集群级私有镜像仓库用于所有需要凭证的私有镜像仓库。 - -如果你的私有镜像仓库需要凭证,为了将凭证传递给 Rancher,你需要编辑每个需要从仓库中拉取镜像的集群的集群选项。 - -私有镜像仓库的配置选项能让 Rancher 知道要从哪里拉取用于集群的[系统镜像](https://rancher.com/docs/rke/latest/en/config-options/system-images/)或[附加组件镜像](https://rancher.com/docs/rke/latest/en/config-options/add-ons/)。 - -- **系统镜像**是维护 Kubernetes 集群所需的组件。 -- **附加组件**用于部署多个集群组件,包括网络插件、ingress controller、DNS 提供商或 metrics server。 - -有关为集群配置期间应用的组件设置私有镜像仓库的更多信息,请参阅[私有镜像仓库的 RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/private-registries/)。 - -Rancher v2.6 引入了[为 RKE 集群配置 ECR 镜像仓库](https://rancher.com/docs/rke/latest/en/config-options/private-registries/#amazon-elastic-container-registry-ecr-private-registry-setup)的功能。 - -### 授权集群端点 - -授权集群端点(ACE)可用于直接访问 Kubernetes API server,而无需通过 Rancher 进行通信。 - -:::note - 
-授权集群端点仅适用于 Rancher 启动的 Kubernetes 集群,即只适用于 Rancher [使用 RKE](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#配置-kubernetes-集群的工具) 来配置的集群。它不适用于托管在 Kubernetes 提供商中的集群,例如 Amazon 的 EKS。 - -::: - -在 Rancher 启动的 Kubernetes 集群中,它默认启用,使用具有 `controlplane` 角色的节点的 IP 和默认的 Kubernetes 自签名证书。 - -有关授权集群端点的工作原理以及使用的原因,请参阅[架构介绍](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点)。 - -我们建议使用具有授权集群端点的负载均衡器。有关详细信息,请参阅[推荐的架构](../../rancher-manager-architecture/architecture-recommendations.md#授权集群端点架构)。 - -### 节点池 - -有关使用 Rancher UI 在 RKE 集群中设置节点池的信息,请参阅[此页面](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md)。 - -### NGINX Ingress - -如果你想使用高可用性配置来发布应用,并且你使用没有原生负载均衡功能的云提供商来托管主机,请启用此选项以在集群中使用 NGINX Ingress。 - -### Metrics Server 监控 - -这是启用或禁用 [Metrics Server](https://rancher.com/docs/rke/latest/en/config-options/add-ons/metrics-server/) 的选项。 - -每个能够使用 RKE 启动集群的云提供商都可以收集指标并监控你的集群节点。如果启用此选项,你可以从你的云提供商门户查看你的节点指标。 - -### 节点上的 Docker 版本 - -表示是否允许节点运行 Rancher 不正式支持的 Docker 版本。 - -如果你选择使用支持的 Docker 版本,Rancher 会禁止 pod 运行在安装了不支持的 Docker 版本的节点上。 - -如需了解各个 Rancher 版本通过了哪些 Docker 版本测试,请参见[支持和维护条款](https://rancher.com/support-maintenance-terms/)。 - -### Docker 根目录 - -如果要添加到集群的节点为 Docker 配置了非默认 Docker 根目录(默认为 `/var/lib/docker`),请在此选项中指定正确的 Docker 根目录。 - -### 默认 Pod 安全策略 - -如果你启用了 **Pod 安全策略支持**,请使用此下拉菜单选择应用于集群的 pod 安全策略。 - -### 节点端口范围 - -更改可用于 [NodePort 服务](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport)的端口范围的选项。默认为 `30000-32767`。 - -### 定期 etcd 快照 - -启用或禁用[定期 etcd 快照](https://rancher.com/docs/rke/latest/en/etcd-snapshots/#etcd-recurring-snapshots)的选项。 - -### Agent 环境变量 - -为 [rancher agent](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md) 设置环境变量的选项。你可以使用键值对设置环境变量。如果 Rancher Agent 需要使用代理与 Rancher Server 通信,则可以使用 Agent 环境变量设置 
`HTTP_PROXY`,`HTTPS_PROXY` 和 `NO_PROXY` 环境变量。 - -### 更新 ingress-nginx - -使用 Kubernetes 1.16 之前版本创建的集群将具有 `OnDelete`的 `ingress-nginx` `updateStrategy`。使用 Kubernetes 1.16 或更高版本创建的集群将具有 `RollingUpdate`。 - -如果 `ingress-nginx` 的 `updateStrategy` 是 `OnDelete`,则需要删除这些 pod 以获得 deployment 正确的版本。 - -### Cluster Agent 配置和 Fleet Agent 配置 - -你可以为 Cluster Agent 和集群的 Fleet Agent 配置调度字段和资源限制。你可以使用这些字段来自定义容忍度、亲和性规则和资源要求。其他容忍度会被尾附到默认容忍度和 Control Plane 节点污点的列表中。如果你定义了自定义亲和性规则,它们将覆盖全局默认亲和性设置。定义资源要求会在以前没有的地方设置请求或限制。 - -:::note - -有了这个选项,你可以覆盖或删除运行集群所需的规则。我们强烈建议你不要删除或覆盖这些规则和其他亲和性规则,因为这可能会导致不必要的影响: - -- `affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution` 用于 `cattle-cluster-agent` -- `cluster-agent-default-affinity` 用于 `cattle-cluster-agent` -- `fleet-agent-default-affinity` 用于 `fleet-agent` - -::: - -如果将 Rancher 降级到 v2.7.4 或更低版本,你的更改将丢失,而且 Agent 将在没有你的自定义设置的情况下重新部署。重新部署时,Fleet Agent 将回退到使用内置默认值。如果降级期间 Fleet 版本没有更改,则不会立即重新部署。 - - -## RKE 集群配置文件参考 - -高级用户可以创建一个 RKE 配置文件,而不是使用 Rancher UI 来为集群选择 Kubernetes 选项。配置文件可以让你在 RKE 安装中设置任何[可用选项](https://rancher.com/docs/rke/latest/en/config-options/)(`system_images` 配置除外)。使用 Rancher UI 或 API 创建集群时,不支持 `system_images` 选项。 - -有关 YAML 中 RKE Kubernetes 集群的可配置选项的完整参考,请参阅 [RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/)。 - -### Rancher 中的配置文件结构 - -RKE(Rancher Kubernetes Engine)是 Rancher 用来配置 Kubernetes 集群的工具。过去,Rancher 的集群配置文件与 [RKE 配置文件](https://rancher.com/docs/rke/latest/en/example-yamls/)的结构是一致的。但由于 Rancher 文件结构发生了变化,因此在 Rancher 中,RKE 集群配置项与非 RKE 配置项是分开的。所以,你的集群配置需要嵌套在集群配置文件中的 `rancher_kubernetes_engine_config` 参数下。使用早期版本的 Rancher 创建的集群配置文件需要针对这种格式进行更新。以下是一个集群配置文件示例: - -
- Example Cluster Config File
-
-```yaml
-#
-# Cluster Config
-#
-docker_root_dir: /var/lib/docker
-enable_cluster_alerting: false
-enable_cluster_monitoring: false
-enable_network_policy: false
-local_cluster_auth_endpoint:
-  enabled: true
-#
-# Rancher Config
-#
-rancher_kubernetes_engine_config: # Your RKE template config goes here.
-  addon_job_timeout: 30
-  authentication:
-    strategy: x509
-  ignore_docker_version: true
-#
-# # Currently only nginx ingress provider is supported.
-# # To disable ingress controller, set `provider: none`
-# # To enable ingress on specific nodes, use the node_selector, eg:
-#    provider: nginx
-#    node_selector:
-#      app: ingress
-#
-  ingress:
-    provider: nginx
-  kubernetes_version: v1.15.3-rancher3-1
-  monitoring:
-    provider: metrics-server
-#
-# If you are using calico on AWS
-#
-# network:
-#   plugin: calico
-#   calico_network_provider:
-#     cloud_provider: aws
-#
-# # To specify flannel interface
-#
-# network:
-#   plugin: flannel
-#   flannel_network_provider:
-#     iface: eth1
-#
-# # To specify flannel interface for canal plugin
-#
-# network:
-#   plugin: canal
-#   canal_network_provider:
-#     iface: eth1
-#
-  network:
-    options:
-      flannel_backend_type: vxlan
-    plugin: canal
-#
-# services:
-#   kube-api:
-#     service_cluster_ip_range: 10.43.0.0/16
-#   kube-controller:
-#     cluster_cidr: 10.42.0.0/16
-#     service_cluster_ip_range: 10.43.0.0/16
-#   kubelet:
-#     cluster_domain: cluster.local
-#     cluster_dns_server: 10.43.0.10
-#
-  services:
-    etcd:
-      backup_config:
-        enabled: true
-        interval_hours: 12
-        retention: 6
-        safe_timestamp: false
-      creation: 12h
-      extra_args:
-        election-timeout: 5000
-        heartbeat-interval: 500
-      gid: 0
-      retention: 72h
-      snapshot: false
-      uid: 0
-    kube_api:
-      always_pull_images: false
-      pod_security_policy: false
-      service_node_port_range: 30000-32767
-      ssh_agent_auth: false
-windows_prefered_cluster: false
-```
-
- -### 默认 DNS 提供商 - -下表显示了默认部署的 DNS 提供商。有关如何配置不同 DNS 提供商的更多信息,请参阅 [DNS 提供商相关的 RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/add-ons/dns/)。CoreDNS 只能在 Kubernetes v1.12.0 及更高版本上使用。 - -| Rancher 版本 | Kubernetes 版本 | 默认 DNS 提供商 | -|-------------|--------------------|----------------------| -| v2.2.5 及更高版本 | v1.14.0 及更高版本 | CoreDNS | -| v2.2.5 及更高版本 | v1.13.x 及更低版本 | kube-dns | -| v2.2.4 及更低版本 | 任意 | kube-dns | - -## YAML 中的 Rancher 特定参数 - -除了 RKE 配置文件选项外,还有可以在配置文件 (YAML) 中配置的 Rancher 特定设置如下。 - -### docker_root_dir - -请参阅 [Docker 根目录](#docker-根目录)。 - -### enable_cluster_monitoring - -启用或禁用[集群监控](../../../integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md)的选项。 - -### enable_network_policy - -启用或禁用项目网络隔离的选项。 - -如果你使用支持执行 Kubernetes 网络策略的 RKE 网络插件(例如 Canal 或 Cisco ACI 插件),则可以使用项目网络隔离。 - -### local_cluster_auth_endpoint - -请参阅[授权集群端点](#授权集群端点)。 - -示例: - -```yaml -local_cluster_auth_endpoint: - enabled: true - fqdn: "FQDN" - ca_certs: |- - -----BEGIN CERTIFICATE----- - ... 
-    -----END CERTIFICATE-----
-```
-
-### Custom Network Plug-in
-
-You can add a custom network plug-in by using the [user-defined add-on functionality](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/) of RKE. You define any add-on that you want deployed after the Kubernetes cluster is deployed.
-
-There are two ways that you can specify an add-on:
-
-- [In-line Add-ons](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/#in-line-add-ons)
-- [Referencing YAML Files for Add-ons](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/#referencing-yaml-files-for-add-ons)
-
-For an example of how to configure a custom network plug-in by editing `cluster.yml`, refer to the [RKE documentation](https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/custom-network-plugin-example).
\ No newline at end of file
diff --git a/versioned_docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md b/versioned_docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md
deleted file mode 100644
index 7954c8a3b11..00000000000
--- a/versioned_docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md
+++ /dev/null
@@ -1,365 +0,0 @@
----
-title: RKE Cluster Configuration Reference
----
-
-
-
-
-
-
-
-When Rancher installs Kubernetes, it uses [RKE](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) or [RKE2](https://docs.rke2.io/) as the Kubernetes distribution.
-
-This section covers the configuration options that are available in Rancher for a new or existing RKE Kubernetes cluster.
-
-
-## Overview
-
-You can configure the Kubernetes options one of two ways:
-
-- [Rancher UI](#configuration-options-in-the-rancher-ui): Use the Rancher UI to select options that are commonly customized when setting up a Kubernetes cluster.
-- [Cluster Config File](#rke-cluster-config-file-reference): Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file.
Using a config file allows you to set any of the options available in an RKE installation, except for system_images configuration, by specifying them in YAML.
-
-The RKE cluster config options are nested under the `rancher_kubernetes_engine_config` directive. For more information, see the section about the [cluster config file.](#rke-cluster-config-file-reference)
-
-In [clusters launched by RKE](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md), you can edit any of the remaining options that follow.
-
-For an example of RKE config file syntax, see the [RKE documentation](https://rancher.com/docs/rke/latest/en/example-yamls/).
-
-The forms in the Rancher UI don't include all advanced options for configuring RKE. For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/)
-
-## Editing Clusters with a Form in the Rancher UI
-
-To edit your cluster,
-
-1. In the upper left corner, click **☰ > Cluster Management**.
-1. Go to the cluster you want to configure and click **⋮ > Edit Config**.
-
-
-## Editing Clusters with YAML
-
-Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the options available in an RKE installation, except for system_images configuration, by specifying them in YAML.
-
-RKE clusters (also called RKE1 clusters) are edited differently than RKE2 and K3s clusters.
-
-To edit an RKE config file directly from the Rancher UI,
-
-1. Click **☰ > Cluster Management**.
-1. Go to the RKE cluster you want to configure and click **⋮ > Edit Config**. This takes you to the RKE configuration form. Note: Because cluster provisioning changed in Rancher 2.6, the **⋮ > Edit as YAML** option can be used for configuring RKE2 clusters, but it can't be used for editing RKE1 configuration.
-1.
In the configuration form, scroll down and click **Edit as YAML**.
-1. Edit the RKE options under the `rancher_kubernetes_engine_config` directive.
-
-## Configuration Options in the Rancher UI
-
-:::tip
-
-Some advanced configuration options are not exposed in the Rancher UI forms, but they can be enabled by editing the RKE cluster configuration file in YAML. For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/)
-
-:::
-
-### Kubernetes Version
-
-The version of Kubernetes installed on your cluster nodes. Rancher packages its own version of Kubernetes based on [hyperkube](https://github.com/rancher/hyperkube).
-
-For more detail, see [Upgrading Kubernetes](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md).
-
-### Network Provider
-
-The [Network Provider](https://kubernetes.io/docs/concepts/cluster-administration/networking/) that the cluster uses. For more details on the different networking providers, please view our [Networking FAQ](../../../faq/container-network-interface-providers.md).
-
-:::caution
-
-After you launch the cluster, you cannot change your network provider. Therefore, choose which network provider you want to use carefully, as Kubernetes doesn't allow switching between network providers. Once a cluster is created with a network provider, changing network providers would require you to tear down the entire cluster and all its applications.
-
-:::
-
-Out of the box, Rancher is compatible with the following network providers:
-
-- [Canal](https://github.com/projectcalico/canal)
-- [Flannel](https://github.com/coreos/flannel#flannel)
-- [Calico](https://docs.projectcalico.org/v3.11/introduction/)
-- [Weave](https://github.com/weaveworks/weave)
-
-
-
-:::note Notes on Weave:
-
-When Weave is selected as network provider, Rancher will automatically enable encryption by generating a random password.
If you want to specify the password manually, please see how to configure your cluster using a [Config File](#rke-cluster-config-file-reference) and the [Weave Network Plug-in Options](https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/#weave-network-plug-in-options). - -::: - -### Project Network Isolation - -If your network provider allows project network isolation, you can choose whether to enable or disable inter-project communication. - -Project network isolation is available if you are using any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin. - -### Kubernetes Cloud Providers - -You can configure a [Kubernetes cloud provider](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md). If you want to use dynamically provisioned [volumes and storage](../../../how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md) in Kubernetes, typically you must select the specific cloud provider in order to use it. For example, if you want to use Amazon EBS, you would need to select the `aws` cloud provider. - -:::note - -If the cloud provider you want to use is not listed as an option, you will need to use the [config file option](#rke-cluster-config-file-reference) to configure the cloud provider. Please reference the [RKE cloud provider documentation](https://rancher.com/docs/rke/latest/en/config-options/cloud-providers/) on how to configure the cloud provider. - -::: - -### Private Registries - -The cluster-level private registry configuration is only used for provisioning clusters. 
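
As an illustrative sketch (the registry URL and credential values below are placeholders, not values from this document), a credentialed cluster-level registry can be declared under the RKE config in the cluster config file:

```yaml
rancher_kubernetes_engine_config:
  private_registries:
    - url: registry.example.com      # placeholder registry address
      user: registry-user            # placeholder credentials
      password: registry-password
      is_default: true               # pull system and add-on images from this registry
```

The same block can be supplied when editing the cluster as YAML.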
- -There are two main ways to set up private registries in Rancher: by setting up the [global default registry](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry.md) through the **Settings** tab in the global view, and by setting up a private registry in the advanced options in the cluster-level settings. The global default registry is intended to be used for air-gapped setups, for registries that do not require credentials. The cluster-level private registry is intended to be used in all setups in which the private registry requires credentials. - -If your private registry requires credentials, you need to pass the credentials to Rancher by editing the cluster options for each cluster that needs to pull images from the registry. - -The private registry configuration option tells Rancher where to pull the [system images](https://rancher.com/docs/rke/latest/en/config-options/system-images/) or [addon images](https://rancher.com/docs/rke/latest/en/config-options/add-ons/) that will be used in your cluster. - -- **System images** are components needed to maintain the Kubernetes cluster. -- **Add-ons** are used to deploy several cluster components, including network plug-ins, the ingress controller, the DNS provider, or the metrics server. - -For more information on setting up a private registry for components applied during the provisioning of the cluster, see the [RKE documentation on private registries](https://rancher.com/docs/rke/latest/en/config-options/private-registries/). - -Rancher v2.6 introduced the ability to configure [ECR registries for RKE clusters](https://rancher.com/docs/rke/latest/en/config-options/private-registries/#amazon-elastic-container-registry-ecr-private-registry-setup). - -### Authorized Cluster Endpoint - -Authorized Cluster Endpoint (ACE) can be used to directly access the Kubernetes API server, without requiring communication through Rancher. 
- -:::note - -ACE is available on RKE, RKE2, and K3s clusters that are provisioned or registered with Rancher. It's not available on clusters in a hosted Kubernetes provider, such as Amazon's EKS. - -::: - -ACE must be set up [manually](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md#authorized-cluster-endpoint-support-for-rke2-and-k3s-clusters) on RKE2 and K3s clusters. In RKE, ACE is enabled by default in Rancher-launched Kubernetes clusters, using the IP of the node with the `controlplane` role and the default Kubernetes self-signed certificates. - -For more detail on how an authorized cluster endpoint works and why it is used, refer to the [architecture section.](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) - -We recommend using a load balancer with the authorized cluster endpoint. For details, refer to the [recommended architecture section.](../../rancher-manager-architecture/architecture-recommendations.md#architecture-for-an-authorized-cluster-endpoint-ace) - -### Node Pools - -For information on using the Rancher UI to set up node pools in an RKE cluster, refer to [this page.](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md) - -### NGINX Ingress - -If you want to publish your applications in a high-availability configuration, and you're hosting your nodes with a cloud-provider that doesn't have a native load-balancing feature, enable this option to use NGINX Ingress within the cluster. - -### Metrics Server Monitoring - -Option to enable or disable [Metrics Server](https://rancher.com/docs/rke/latest/en/config-options/add-ons/metrics-server/). - -Each cloud provider capable of launching a cluster using RKE can collect metrics and monitor for your cluster nodes. 
Enable this option to view your node metrics from your cloud provider's portal.
-
-### Pod Security Policy Support
-
-You must have an existing Pod Security Policy configured before you can use this option.
-
-### Docker Version on Nodes
-
-Configures whether nodes are allowed to run versions of Docker that Rancher doesn't officially support.
-
-If you choose to require a supported Docker version, Rancher will stop pods from running on nodes that don't have a supported Docker version installed.
-
-For details on which Docker versions were tested with each Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/)
-
-### Docker Root Directory
-
-If the nodes you are adding to the cluster have Docker configured with a non-default Docker Root Directory (default is `/var/lib/docker`), specify the correct Docker Root Directory in this option.
-
-### Default Pod Security Policy
-
-If you enable **Pod Security Policy Support**, use this drop-down to choose the pod security policy that's applied to the cluster.
-
-### Node Port Range
-
-Option to change the range of ports that can be used for [NodePort services](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport). Default is `30000-32767`.
-
-### Recurring etcd Snapshots
-
-Option to enable or disable [recurring etcd snapshots](https://rancher.com/docs/rke/latest/en/etcd-snapshots/#etcd-recurring-snapshots).
-
-### Agent Environment Variables
-
-Option to set environment variables for [rancher agents](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md). The environment variables can be set using key-value pairs. If the rancher agent requires the use of a proxy to communicate with the Rancher server, the `HTTP_PROXY`, `HTTPS_PROXY` and `NO_PROXY` environment variables can be set using agent environment variables.
-
-### Updating ingress-nginx
-
-Clusters that were created before Kubernetes 1.16 will have an `ingress-nginx` `updateStrategy` of `OnDelete`.
Clusters that were created with Kubernetes 1.16 or newer will have `RollingUpdate`.
-
-If the `updateStrategy` of `ingress-nginx` is `OnDelete`, you will need to delete these pods to get the correct version for your deployment.
-
-### Cluster Agent Configuration and Fleet Agent Configuration
-
-You can configure the scheduling fields and resource limits for the Cluster Agent and the cluster's Fleet Agent. You can use these fields to customize tolerations, affinity rules, and resource requirements. Additional tolerations are appended to a list of default tolerations and control plane node taints. If you define custom affinity rules, they override the global default affinity setting. Defining resource requirements sets requests or limits where there previously were none.
-
-:::note
-
-With this option, it's possible to override or remove rules that are required for the functioning of the cluster. We strongly recommend against removing or overriding these and any other affinity rules, as this may cause unwanted side effects:
-
-- `affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution` for `cattle-cluster-agent`
-- `cluster-agent-default-affinity` for `cattle-cluster-agent`
-- `fleet-agent-default-affinity` for `fleet-agent`
-
-:::
-
-If you downgrade Rancher to v2.7.4 or below, your changes will be lost and the agents will re-deploy without your customizations. The Fleet agent will fall back to using its built-in default values when it re-deploys. If the Fleet version doesn't change during the downgrade, the re-deploy won't be immediate.
-
-
-## RKE Cluster Config File Reference
-
-Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the [options available](https://rancher.com/docs/rke/latest/en/config-options/) in an RKE installation, except for `system_images` configuration.
The `system_images` option is not supported when creating a cluster with the Rancher UI or API.
-
-For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/)
-
-### Config File Structure in Rancher
-
-RKE (Rancher Kubernetes Engine) is the tool that Rancher uses to provision Kubernetes clusters. Rancher's cluster config files used to have the same structure as [RKE config files,](https://rancher.com/docs/rke/latest/en/example-yamls/) but the structure changed so that in Rancher, RKE cluster config items are separated from non-RKE config items. Therefore, configuration for your cluster needs to be nested under the `rancher_kubernetes_engine_config` directive in the cluster config file. Cluster config files created with earlier versions of Rancher will need to be updated for this format. An example cluster config file is included below.
-
-<details>
- Example Cluster Config File - -```yaml -# -# Cluster Config -# -docker_root_dir: /var/lib/docker -enable_cluster_alerting: false -enable_cluster_monitoring: false -enable_network_policy: false -local_cluster_auth_endpoint: - enabled: true -# -# Rancher Config -# -rancher_kubernetes_engine_config: # Your RKE template config goes here. - addon_job_timeout: 30 - authentication: - strategy: x509 - ignore_docker_version: true -# -# # Currently only nginx ingress provider is supported. -# # To disable ingress controller, set `provider: none` -# # To enable ingress on specific nodes, use the node_selector, eg: -# provider: nginx -# node_selector: -# app: ingress -# - ingress: - provider: nginx - kubernetes_version: v1.15.3-rancher3-1 - monitoring: - provider: metrics-server -# -# If you are using calico on AWS -# -# network: -# plugin: calico -# calico_network_provider: -# cloud_provider: aws -# -# # To specify flannel interface -# -# network: -# plugin: flannel -# flannel_network_provider: -# iface: eth1 -# -# # To specify flannel interface for canal plugin -# -# network: -# plugin: canal -# canal_network_provider: -# iface: eth1 -# - network: - options: - flannel_backend_type: vxlan - plugin: canal -# -# services: -# kube-api: -# service_cluster_ip_range: 10.43.0.0/16 -# kube-controller: -# cluster_cidr: 10.42.0.0/16 -# service_cluster_ip_range: 10.43.0.0/16 -# kubelet: -# cluster_domain: cluster.local -# cluster_dns_server: 10.43.0.10 -# - services: - etcd: - backup_config: - enabled: true - interval_hours: 12 - retention: 6 - safe_timestamp: false - creation: 12h - extra_args: - election-timeout: 5000 - heartbeat-interval: 500 - gid: 0 - retention: 72h - snapshot: false - uid: 0 - kube_api: - always_pull_images: false - pod_security_policy: false - service_node_port_range: 30000-32767 - ssh_agent_auth: false -windows_prefered_cluster: false -``` -
- -### Default DNS provider - -The table below indicates what DNS provider is deployed by default. See [RKE documentation on DNS provider](https://rancher.com/docs/rke/latest/en/config-options/add-ons/dns/) for more information how to configure a different DNS provider. CoreDNS can only be used on Kubernetes v1.12.0 and higher. - -| Rancher version | Kubernetes version | Default DNS provider | -|-------------|--------------------|----------------------| -| v2.2.5 and higher | v1.14.0 and higher | CoreDNS | -| v2.2.5 and higher | v1.13.x and lower | kube-dns | -| v2.2.4 and lower | any | kube-dns | - -## Rancher Specific Parameters in YAML - -Besides the RKE config file options, there are also Rancher specific settings that can be configured in the Config File (YAML): - -### docker_root_dir - -See [Docker Root Directory](#docker-root-directory). - -### enable_cluster_monitoring - -Option to enable or disable [Cluster Monitoring](../../../integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md). - -### enable_network_policy - -Option to enable or disable Project Network Isolation. - -Project network isolation is available if you are using any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin. - -### local_cluster_auth_endpoint - -See [Authorized Cluster Endpoint](#authorized-cluster-endpoint). - -Example: - -```yaml -local_cluster_auth_endpoint: - enabled: true - fqdn: "FQDN" - ca_certs: |- - -----BEGIN CERTIFICATE----- - ... - -----END CERTIFICATE----- -``` - -### Custom Network Plug-in - -You can add a custom network plug-in by using the [user-defined add-on functionality](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/) of RKE. You define any add-on that you want deployed after the Kubernetes cluster is deployed. 
- -There are two ways that you can specify an add-on: - -- [In-line Add-ons](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/#in-line-add-ons) -- [Referencing YAML Files for Add-ons](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/#referencing-yaml-files-for-add-ons) - -For an example of how to configure a custom network plug-in by editing the `cluster.yml`, refer to the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/custom-network-plugin-example) \ No newline at end of file From 1ddd8efc06968fb3349bf3c5d0fd410de34e2c81 Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Wed, 23 Jul 2025 15:26:19 -0700 Subject: [PATCH 19/57] Remove RKE1 references in cluster-configuration.md --- .../cluster-configuration/cluster-configuration.md | 1 - .../cluster-configuration/cluster-configuration.md | 1 - .../cluster-configuration/cluster-configuration.md | 1 - .../cluster-configuration/cluster-configuration.md | 1 - 4 files changed, 4 deletions(-) diff --git a/docs/reference-guides/cluster-configuration/cluster-configuration.md b/docs/reference-guides/cluster-configuration/cluster-configuration.md index 8abd3377435..75bae23a493 100644 --- a/docs/reference-guides/cluster-configuration/cluster-configuration.md +++ b/docs/reference-guides/cluster-configuration/cluster-configuration.md @@ -14,7 +14,6 @@ For information on editing cluster membership, go to [this page.](../../how-to-g The cluster configuration options depend on the type of Kubernetes cluster: -- [RKE Cluster Configuration](rancher-server-configuration/rke1-cluster-configuration.md) - [RKE2 Cluster Configuration](rancher-server-configuration/rke2-cluster-configuration.md) - [K3s Cluster Configuration](rancher-server-configuration/k3s-cluster-configuration.md) - [EKS Cluster Configuration](rancher-server-configuration/eks-cluster-configuration.md) diff --git 
a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/cluster-configuration.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/cluster-configuration.md
index 21f3b31f422..8add5f0520a 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/cluster-configuration.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/cluster-configuration.md
@@ -14,7 +14,6 @@ title: Cluster Configuration
 
 The cluster configuration options depend on the type of Kubernetes cluster:
 
-- [RKE Cluster Configuration](rancher-server-configuration/rke1-cluster-configuration.md)
 - [RKE2 Cluster Configuration](rancher-server-configuration/rke2-cluster-configuration.md)
 - [K3s Cluster Configuration](rancher-server-configuration/k3s-cluster-configuration.md)
 - [EKS Cluster Configuration](rancher-server-configuration/eks-cluster-configuration.md)
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/cluster-configuration.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/cluster-configuration.md
index 21f3b31f422..8add5f0520a 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/cluster-configuration.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/cluster-configuration.md
@@ -14,7 +14,6 @@ title: Cluster Configuration
 
 The cluster configuration options depend on the type of Kubernetes cluster:
 
-- [RKE Cluster Configuration](rancher-server-configuration/rke1-cluster-configuration.md)
 - [RKE2 Cluster Configuration](rancher-server-configuration/rke2-cluster-configuration.md)
 - [K3s Cluster Configuration](rancher-server-configuration/k3s-cluster-configuration.md)
 - [EKS Cluster Configuration](rancher-server-configuration/eks-cluster-configuration.md)
diff --git a/versioned_docs/version-2.12/reference-guides/cluster-configuration/cluster-configuration.md b/versioned_docs/version-2.12/reference-guides/cluster-configuration/cluster-configuration.md
index 8abd3377435..75bae23a493 100644
---
a/versioned_docs/version-2.12/reference-guides/cluster-configuration/cluster-configuration.md +++ b/versioned_docs/version-2.12/reference-guides/cluster-configuration/cluster-configuration.md @@ -14,7 +14,6 @@ For information on editing cluster membership, go to [this page.](../../how-to-g The cluster configuration options depend on the type of Kubernetes cluster: -- [RKE Cluster Configuration](rancher-server-configuration/rke1-cluster-configuration.md) - [RKE2 Cluster Configuration](rancher-server-configuration/rke2-cluster-configuration.md) - [K3s Cluster Configuration](rancher-server-configuration/k3s-cluster-configuration.md) - [EKS Cluster Configuration](rancher-server-configuration/eks-cluster-configuration.md) From 4c3b23bbbc06c7684fe93e85ec20c083e5e8568a Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Wed, 23 Jul 2025 15:38:18 -0700 Subject: [PATCH 20/57] Remove RKE1 references in architecture-recommendations.md --- .../architecture-recommendations.md | 42 +------------------ .../architecture-recommendations.md | 42 +------------------ .../architecture-recommendations.md | 42 +------------------ .../architecture-recommendations.md | 42 +------------------ 4 files changed, 4 insertions(+), 164 deletions(-) diff --git a/docs/reference-guides/rancher-manager-architecture/architecture-recommendations.md b/docs/reference-guides/rancher-manager-architecture/architecture-recommendations.md index b84375fbda5..99157d3bbba 100644 --- a/docs/reference-guides/rancher-manager-architecture/architecture-recommendations.md +++ b/docs/reference-guides/rancher-manager-architecture/architecture-recommendations.md @@ -32,14 +32,6 @@ One option for the underlying Kubernetes cluster is to use K3s Kubernetes. 
K3s i ![Architecture of a K3s Kubernetes Cluster Running the Rancher Management Server](/img/k3s-server-storage.svg) -### RKE Kubernetes Cluster Installations - -In an RKE installation, the cluster data is replicated on each of three etcd nodes in the cluster, providing redundancy and data duplication in case one of the nodes fails. - -
Architecture of an RKE Kubernetes Cluster Running the Rancher Management Server
- -![Architecture of an RKE Kubernetes cluster running the Rancher management server](/img/rke-server-storage.svg) - ## Recommended Load Balancer Configuration for Kubernetes Installations We recommend the following configurations for the load balancer and Ingress controllers: @@ -61,7 +53,7 @@ For the best performance and greater security, we recommend a dedicated Kubernet ## Recommended Node Roles for Kubernetes Installations -The below recommendations apply when Rancher is installed on a K3s Kubernetes cluster or an RKE Kubernetes cluster. +The below recommendations apply when Rancher is installed on a K3s Kubernetes cluster. ### K3s Cluster Roles @@ -69,38 +61,6 @@ In K3s clusters, there are two types of nodes: server nodes and agent nodes. Bot For the cluster running the Rancher management server, we recommend using two server nodes. Agent nodes are not required. -### RKE Cluster Roles - -If Rancher is installed on an RKE Kubernetes cluster, the cluster should have three nodes, and each node should have all three Kubernetes roles: etcd, controlplane, and worker. - -### Contrasting RKE Cluster Architecture for Rancher Server and for Downstream Kubernetes Clusters - -Our recommendation for RKE node roles on the Rancher server cluster contrasts with our recommendations for the downstream user clusters that run your apps and services. - -Rancher uses RKE as a library when provisioning downstream Kubernetes clusters. Note: The capability to provision downstream K3s clusters will be added in a future version of Rancher. - -For downstream Kubernetes clusters, we recommend that each node in a user cluster should have a single role for stability and scalability. - -![Kubernetes Roles for Nodes in Rancher Server Cluster vs. User Clusters](/img/rancher-architecture-node-roles.svg) - -RKE only requires at least one node with each role and does not require nodes to be restricted to one role. 
However, for the clusters that run your apps, we recommend separate roles for each node so that workloads on worker nodes don't interfere with the Kubernetes master or cluster data as your services scale. - -We recommend that downstream user clusters should have at least: - -- **Three nodes with only the etcd role** to maintain a quorum if one node is lost, making the state of your cluster highly available -- **Two nodes with only the controlplane role** to make the master component highly available -- **One or more nodes with only the worker role** to run the Kubernetes node components, as well as the workloads for your apps and services - -With that said, it is safe to use all three roles on three nodes when setting up the Rancher server because: - -* It allows one `etcd` node failure. -* It maintains multiple instances of the master components by having multiple `controlplane` nodes. -* No other workloads than Rancher itself should be created on this cluster. - -Because no additional workloads will be deployed on the Rancher server cluster, in most cases it is not necessary to use the same architecture that we recommend for the scalability and reliability of downstream clusters. - -For more best practices for downstream clusters, refer to the [production checklist](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters/checklist-for-production-ready-clusters.md) or our [best practices guide.](../best-practices/best-practices.md) - ## Architecture for an Authorized Cluster Endpoint (ACE) If you are using an [authorized cluster endpoint (ACE),](../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) we recommend creating an FQDN pointing to a load balancer which balances traffic across your nodes with the `controlplane` role. 
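
As a hedged illustration of that recommendation (the node IPs are placeholders, not values from this document), an NGINX `stream` block can balance the FQDN's traffic across the Kubernetes API ports of the `controlplane` nodes:

```
# nginx.conf fragment -- illustrative sketch only
stream {
    upstream ace_controlplane {
        server 172.16.0.11:6443;   # controlplane node 1 (placeholder IP)
        server 172.16.0.12:6443;   # controlplane node 2 (placeholder IP)
        server 172.16.0.13:6443;   # controlplane node 3 (placeholder IP)
    }

    server {
        listen 6443;               # the ACE FQDN resolves to this load balancer
        proxy_pass ace_controlplane;
    }
}
```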
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-manager-architecture/architecture-recommendations.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-manager-architecture/architecture-recommendations.md index 7e61817fbe6..2b518f95407 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-manager-architecture/architecture-recommendations.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-manager-architecture/architecture-recommendations.md @@ -28,14 +28,6 @@ title: 架构推荐 ![运行 Rancher Management Server 的 K3s Kubernetes 集群的架构](/img/k3s-server-storage.svg) -### RKE Kubernetes 集群安装 - -在 RKE 安装中,集群数据在集群中的三个 etcd 节点上复制,以在某个节点发生故障时提供冗余和进行数据复制。 - -
运行 Rancher Management Server 的 RKE Kubernetes 集群的架构
- -![运行 Rancher Management Server 的 RKE Kubernetes 集群的架构](/img/rke-server-storage.svg) - ## Kubernetes 安装的负载均衡器推荐配置 我们建议你为负载均衡器和 Ingress Controller 使用以下配置: @@ -57,7 +49,7 @@ title: 架构推荐 ## Kubernetes 安装的推荐节点角色 -如果 Rancher 安装在 K3s Kubernetes 或 RKE Kubernetes 集群上,以下建议适用。 +如果 Rancher 安装在 K3s Kubernetes 上,则适用以下建议。 ### K3s 集群角色 @@ -65,38 +57,6 @@ title: 架构推荐 对于运行 Rancher Management Server 的集群,我们建议使用两个 server 节点。不需要 Agent 节点。 -### RKE 集群角色 - -如果 Rancher 安装在 RKE Kubernetes 集群上,该集群应具有三个节点,并且每个节点都应具有所有三个 Kubernetes 角色,分别是 etcd,controlplane 和 worker。 - -### Rancher Server 和下游 Kubernetes 集群的 RKE 集群架构对比 - -我们对 Rancher Server 集群上 RKE 节点角色建议,与对运行你的应用和服务的下游集群的建议相反。 - -在配置下游 Kubernetes 集群时,Rancher 使用 RKE 作为创建下游 Kubernetes 集群的工具。注意:Rancher 将在未来的版本中添加配置下游 K3s 集群的功能。 - -我们建议下游 Kubernetes 集群中的每个节点都只分配一个角色,以确保稳定性和可扩展性。 - -![Rancher Server 集群中和下游集群中节点的 Kubernetes 角色对比](/img/rancher-architecture-node-roles.svg) - -RKE 每个角色至少需要一个节点,但并不强制每个节点只能有一个角色。但是,我们建议为运行应用的集群中的每个节点,使用单独的角色,以保证在服务拓展时,worker 节点上的工作负载不影响 Kubernetes master 或集群的数据。 - -以下是我们对下游集群的最低配置建议: - -- **三个仅使用 etcd 角色的节点** ,以在三个节点中其中一个发生故障时,仍能保障集群的高可用性。 -- **两个只有 controlplane 角色的节点** ,以保证 master 组件的高可用性。 -- **一个或多个只有 worker 角色的节点**,用于运行 Kubernetes 节点组件,以及你部署的服务或应用的工作负载。 - -在设置 Rancher Server 时,在三个节点上使用全部这三个角色也是安全的,因为: - -* 它允许一个 `etcd` 节点故障。 -* 它通过多个 `controlplane` 节点来维护 master 组件的多个实例。 -* 此集群上没有创建除 Rancher 之外的其他工作负载。 - -由于 Rancher Server 集群中没有部署其他工作负载,因此在大多数情况下,这个集群都不需要使用我们出于可扩展性和可用性的考虑,而为下游集群推荐的架构。 - -有关下游集群的最佳实践,请查看[生产环境清单](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters/checklist-for-production-ready-clusters.md)或[最佳实践](../best-practices/best-practices.md)。 - ## 授权集群端点架构 如果你使用[授权集群端点(ACE)](../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点),我们建议你创建一个指向负载均衡器的 FQDN,这个负载均衡器把流量转到所有角色为 `controlplane` 的节点。 diff --git 
a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-manager-architecture/architecture-recommendations.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-manager-architecture/architecture-recommendations.md index 7e61817fbe6..2b518f95407 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-manager-architecture/architecture-recommendations.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-manager-architecture/architecture-recommendations.md @@ -28,14 +28,6 @@ title: 架构推荐 ![运行 Rancher Management Server 的 K3s Kubernetes 集群的架构](/img/k3s-server-storage.svg) -### RKE Kubernetes 集群安装 - -在 RKE 安装中,集群数据在集群中的三个 etcd 节点上复制,以在某个节点发生故障时提供冗余和进行数据复制。 - -
运行 Rancher Management Server 的 RKE Kubernetes 集群的架构
- -![运行 Rancher Management Server 的 RKE Kubernetes 集群的架构](/img/rke-server-storage.svg) - ## Kubernetes 安装的负载均衡器推荐配置 我们建议你为负载均衡器和 Ingress Controller 使用以下配置: @@ -57,7 +49,7 @@ title: 架构推荐 ## Kubernetes 安装的推荐节点角色 -如果 Rancher 安装在 K3s Kubernetes 或 RKE Kubernetes 集群上,以下建议适用。 +如果 Rancher 安装在 K3s Kubernetes 上,则适用以下建议。 ### K3s 集群角色 @@ -65,38 +57,6 @@ title: 架构推荐 对于运行 Rancher Management Server 的集群,我们建议使用两个 server 节点。不需要 Agent 节点。 -### RKE 集群角色 - -如果 Rancher 安装在 RKE Kubernetes 集群上,该集群应具有三个节点,并且每个节点都应具有所有三个 Kubernetes 角色,分别是 etcd,controlplane 和 worker。 - -### Rancher Server 和下游 Kubernetes 集群的 RKE 集群架构对比 - -我们对 Rancher Server 集群上 RKE 节点角色建议,与对运行你的应用和服务的下游集群的建议相反。 - -在配置下游 Kubernetes 集群时,Rancher 使用 RKE 作为创建下游 Kubernetes 集群的工具。注意:Rancher 将在未来的版本中添加配置下游 K3s 集群的功能。 - -我们建议下游 Kubernetes 集群中的每个节点都只分配一个角色,以确保稳定性和可扩展性。 - -![Rancher Server 集群中和下游集群中节点的 Kubernetes 角色对比](/img/rancher-architecture-node-roles.svg) - -RKE 每个角色至少需要一个节点,但并不强制每个节点只能有一个角色。但是,我们建议为运行应用的集群中的每个节点,使用单独的角色,以保证在服务拓展时,worker 节点上的工作负载不影响 Kubernetes master 或集群的数据。 - -以下是我们对下游集群的最低配置建议: - -- **三个仅使用 etcd 角色的节点** ,以在三个节点中其中一个发生故障时,仍能保障集群的高可用性。 -- **两个只有 controlplane 角色的节点** ,以保证 master 组件的高可用性。 -- **一个或多个只有 worker 角色的节点**,用于运行 Kubernetes 节点组件,以及你部署的服务或应用的工作负载。 - -在设置 Rancher Server 时,在三个节点上使用全部这三个角色也是安全的,因为: - -* 它允许一个 `etcd` 节点故障。 -* 它通过多个 `controlplane` 节点来维护 master 组件的多个实例。 -* 此集群上没有创建除 Rancher 之外的其他工作负载。 - -由于 Rancher Server 集群中没有部署其他工作负载,因此在大多数情况下,这个集群都不需要使用我们出于可扩展性和可用性的考虑,而为下游集群推荐的架构。 - -有关下游集群的最佳实践,请查看[生产环境清单](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters/checklist-for-production-ready-clusters.md)或[最佳实践](../best-practices/best-practices.md)。 - ## 授权集群端点架构 如果你使用[授权集群端点(ACE)](../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点),我们建议你创建一个指向负载均衡器的 FQDN,这个负载均衡器把流量转到所有角色为 `controlplane` 的节点。 diff --git 
a/versioned_docs/version-2.12/reference-guides/rancher-manager-architecture/architecture-recommendations.md b/versioned_docs/version-2.12/reference-guides/rancher-manager-architecture/architecture-recommendations.md index b84375fbda5..99157d3bbba 100644 --- a/versioned_docs/version-2.12/reference-guides/rancher-manager-architecture/architecture-recommendations.md +++ b/versioned_docs/version-2.12/reference-guides/rancher-manager-architecture/architecture-recommendations.md @@ -32,14 +32,6 @@ One option for the underlying Kubernetes cluster is to use K3s Kubernetes. K3s i ![Architecture of a K3s Kubernetes Cluster Running the Rancher Management Server](/img/k3s-server-storage.svg) -### RKE Kubernetes Cluster Installations - -In an RKE installation, the cluster data is replicated on each of three etcd nodes in the cluster, providing redundancy and data duplication in case one of the nodes fails. - -
Architecture of an RKE Kubernetes Cluster Running the Rancher Management Server
- -![Architecture of an RKE Kubernetes cluster running the Rancher management server](/img/rke-server-storage.svg) - ## Recommended Load Balancer Configuration for Kubernetes Installations We recommend the following configurations for the load balancer and Ingress controllers: @@ -61,7 +53,7 @@ For the best performance and greater security, we recommend a dedicated Kubernet ## Recommended Node Roles for Kubernetes Installations -The below recommendations apply when Rancher is installed on a K3s Kubernetes cluster or an RKE Kubernetes cluster. +The below recommendations apply when Rancher is installed on a K3s Kubernetes cluster. ### K3s Cluster Roles @@ -69,38 +61,6 @@ In K3s clusters, there are two types of nodes: server nodes and agent nodes. Bot For the cluster running the Rancher management server, we recommend using two server nodes. Agent nodes are not required. -### RKE Cluster Roles - -If Rancher is installed on an RKE Kubernetes cluster, the cluster should have three nodes, and each node should have all three Kubernetes roles: etcd, controlplane, and worker. - -### Contrasting RKE Cluster Architecture for Rancher Server and for Downstream Kubernetes Clusters - -Our recommendation for RKE node roles on the Rancher server cluster contrasts with our recommendations for the downstream user clusters that run your apps and services. - -Rancher uses RKE as a library when provisioning downstream Kubernetes clusters. Note: The capability to provision downstream K3s clusters will be added in a future version of Rancher. - -For downstream Kubernetes clusters, we recommend that each node in a user cluster should have a single role for stability and scalability. - -![Kubernetes Roles for Nodes in Rancher Server Cluster vs. User Clusters](/img/rancher-architecture-node-roles.svg) - -RKE only requires at least one node with each role and does not require nodes to be restricted to one role. 
However, for the clusters that run your apps, we recommend separate roles for each node so that workloads on worker nodes don't interfere with the Kubernetes master or cluster data as your services scale. - -We recommend that downstream user clusters should have at least: - -- **Three nodes with only the etcd role** to maintain a quorum if one node is lost, making the state of your cluster highly available -- **Two nodes with only the controlplane role** to make the master component highly available -- **One or more nodes with only the worker role** to run the Kubernetes node components, as well as the workloads for your apps and services - -With that said, it is safe to use all three roles on three nodes when setting up the Rancher server because: - -* It allows one `etcd` node failure. -* It maintains multiple instances of the master components by having multiple `controlplane` nodes. -* No other workloads than Rancher itself should be created on this cluster. - -Because no additional workloads will be deployed on the Rancher server cluster, in most cases it is not necessary to use the same architecture that we recommend for the scalability and reliability of downstream clusters. - -For more best practices for downstream clusters, refer to the [production checklist](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters/checklist-for-production-ready-clusters.md) or our [best practices guide.](../best-practices/best-practices.md) - ## Architecture for an Authorized Cluster Endpoint (ACE) If you are using an [authorized cluster endpoint (ACE),](../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) we recommend creating an FQDN pointing to a load balancer which balances traffic across your nodes with the `controlplane` role. 
From 0b5281dbf56e7af75fd46f3e856e9466aff9e5e6 Mon Sep 17 00:00:00 2001 From: Krunal Hingu Date: Tue, 15 Jul 2025 09:28:02 +0530 Subject: [PATCH 21/57] refactor: move advance guide of cis benchmark to compliance --- ...reate-a-custom-benchmark-version-to-run.md | 13 - ...able-alerting-for-rancher-cis-benchmark.md | 24 - .../install-rancher-cis-benchmark.md | 15 - .../cis-scan-guides/run-a-scan.md | 26 - .../uninstall-rancher-cis-benchmark.md | 13 - .../cis-scan-guides/view-reports.md | 23 - .../compliance-scan-guides.md} | 12 +- ...-alerts-for-periodic-scan-on-a-schedule.md | 16 +- ...eate-a-custom-compliance-version-to-run.md | 13 + .../enable-alerting-for-rancher-compliance.md | 24 + .../install-rancher-compliance.md | 21 + .../run-a-scan-periodically-on-a-schedule.md | 8 +- .../compliance-scan-guides/run-a-scan.md | 26 + .../skip-tests.md | 20 +- .../uninstall-rancher-compliance.md | 13 + .../compliance-scan-guides/view-reports.md | 23 + .../monitoring-v2-configuration/receivers.md | 12 +- docusaurus.config.js | 1620 +++++++++-------- sidebars.js | 892 +++++---- 19 files changed, 1457 insertions(+), 1357 deletions(-) delete mode 100644 docs/how-to-guides/advanced-user-guides/cis-scan-guides/create-a-custom-benchmark-version-to-run.md delete mode 100644 docs/how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark.md delete mode 100644 docs/how-to-guides/advanced-user-guides/cis-scan-guides/install-rancher-cis-benchmark.md delete mode 100644 docs/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan.md delete mode 100644 docs/how-to-guides/advanced-user-guides/cis-scan-guides/uninstall-rancher-cis-benchmark.md delete mode 100644 docs/how-to-guides/advanced-user-guides/cis-scan-guides/view-reports.md rename docs/how-to-guides/advanced-user-guides/{cis-scan-guides/cis-scan-guides.md => compliance-scan-guides/compliance-scan-guides.md} (56%) rename docs/how-to-guides/advanced-user-guides/{cis-scan-guides => 
compliance-scan-guides}/configure-alerts-for-periodic-scan-on-a-schedule.md (56%) create mode 100644 docs/how-to-guides/advanced-user-guides/compliance-scan-guides/create-a-custom-compliance-version-to-run.md create mode 100644 docs/how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance.md create mode 100644 docs/how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance.md rename docs/how-to-guides/advanced-user-guides/{cis-scan-guides => compliance-scan-guides}/run-a-scan-periodically-on-a-schedule.md (75%) create mode 100644 docs/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan.md rename docs/how-to-guides/advanced-user-guides/{cis-scan-guides => compliance-scan-guides}/skip-tests.md (55%) create mode 100644 docs/how-to-guides/advanced-user-guides/compliance-scan-guides/uninstall-rancher-compliance.md create mode 100644 docs/how-to-guides/advanced-user-guides/compliance-scan-guides/view-reports.md diff --git a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/create-a-custom-benchmark-version-to-run.md b/docs/how-to-guides/advanced-user-guides/cis-scan-guides/create-a-custom-benchmark-version-to-run.md deleted file mode 100644 index 8d3b66c7e4e..00000000000 --- a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/create-a-custom-benchmark-version-to-run.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Create a Custom Benchmark Version for Running a Cluster Scan ---- - - - - - -There could be some Kubernetes cluster setups that require custom configurations of the Benchmark tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream CIS Benchmarks look for them. - -It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-cis-benchmark` application. 
- -For details, see [this page.](../../../integrations-in-rancher/cis-scans/custom-benchmark.md) \ No newline at end of file diff --git a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark.md b/docs/how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark.md deleted file mode 100644 index ef2b5ae330d..00000000000 --- a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark.md +++ /dev/null @@ -1,24 +0,0 @@ ---- -title: Enable Alerting for Rancher CIS Benchmark ---- - - - - - -Alerts can be configured to be sent out for a scan that runs on a schedule. - -:::note Prerequisite: - -Before enabling alerts for `rancher-cis-benchmark`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.](../../../reference-guides/monitoring-v2-configuration/receivers.md) - -While configuring the routes for `rancher-cis-benchmark` alerts, you can specify the matching using the key-value pair `job: rancher-cis-scan`. An example route configuration is [here.](../../../reference-guides/monitoring-v2-configuration/receivers.md#example-route-config-for-cis-scan-alerts) - -::: - -While installing or upgrading the `rancher-cis-benchmark` Helm chart, set the following flag to `true` in the `values.yaml`: - -```yaml -alerts: - enabled: true -``` \ No newline at end of file diff --git a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/install-rancher-cis-benchmark.md b/docs/how-to-guides/advanced-user-guides/cis-scan-guides/install-rancher-cis-benchmark.md deleted file mode 100644 index c6987a97c64..00000000000 --- a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/install-rancher-cis-benchmark.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Install Rancher CIS Benchmark ---- - - - - - -1. In the upper left corner, click **☰ > Cluster Management**. -1. 
On the **Clusters** page, go to the cluster where you want to install CIS Benchmark and click **Explore**. -1. In the left navigation bar, click **Apps > Charts**. -1. Click **CIS Benchmark** -1. Click **Install**. - -**Result:** The CIS scan application is deployed on the Kubernetes cluster. diff --git a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan.md b/docs/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan.md deleted file mode 100644 index 2fede69bee6..00000000000 --- a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: Run a Scan ---- - - - - - -When a ClusterScan custom resource is created, it launches a new CIS scan on the cluster for the chosen ClusterScanProfile. - -:::note - -There is currently a limitation of running only one CIS scan at a time for a cluster. If you create multiple ClusterScan custom resources, they will be run one after the other by the operator, and until one scan finishes, the rest of the ClusterScan custom resources will be in the "Pending" state. - -::: - -To run a scan, - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. -1. Click **CIS Benchmark > Scan**. -1. Click **Create**. -1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on. -1. Click **Create**. - -**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears. 
\ No newline at end of file diff --git a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/uninstall-rancher-cis-benchmark.md b/docs/how-to-guides/advanced-user-guides/cis-scan-guides/uninstall-rancher-cis-benchmark.md deleted file mode 100644 index df23f7abbdc..00000000000 --- a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/uninstall-rancher-cis-benchmark.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Uninstall Rancher CIS Benchmark ---- - - - - - -1. From the **Cluster Dashboard,** go to the left navigation bar and click **Apps > Installed Apps**. -1. Go to the `cis-operator-system` namespace and check the boxes next to `rancher-cis-benchmark-crd` and `rancher-cis-benchmark`. -1. Click **Delete** and confirm **Delete**. - -**Result:** The `rancher-cis-benchmark` application is uninstalled. \ No newline at end of file diff --git a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/view-reports.md b/docs/how-to-guides/advanced-user-guides/cis-scan-guides/view-reports.md deleted file mode 100644 index bb9045033bc..00000000000 --- a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/view-reports.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: View Reports ---- - - - - - -To view the generated CIS scan reports, - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. -1. Click **CIS Benchmark > Scan**. -1. The **Scans** page will show the generated reports. To see a detailed report, go to a scan report and click the name. - -One can download the report from the Scans list or from the scan detail page. - -To get the verbose version of the CIS scan results, run the following command on the cluster that was scanned. Note that the scan must be completed before this can be done. 
-
-```console
-export REPORT="scan-report-name"
-kubectl get clusterscanreport $REPORT -o json |jq ".spec.reportJSON | fromjson" | jq -r ".actual_value_map_data" | base64 -d | gunzip | jq .
-```
diff --git a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md
similarity index 56%
rename from docs/how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md
rename to docs/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md
index a7c6ed43472..4b304a13bea 100644
--- a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md
+++ b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md
@@ -1,17 +1,17 @@
 ---
-title: CIS Scan Guides
+title: Compliance Scan Guides
 ---
 
-
+
 
-- [Install rancher-cis-benchmark](install-rancher-cis-benchmark.md)
-- [Uninstall rancher-cis-benchmark](uninstall-rancher-cis-benchmark.md)
+- [Install rancher-compliance](install-rancher-compliance.md)
+- [Uninstall rancher-compliance](uninstall-rancher-compliance.md)
 - [Run a Scan](run-a-scan.md)
 - [Run a Scan Periodically on a Schedule](run-a-scan-periodically-on-a-schedule.md)
 - [Skip Tests](skip-tests.md)
 - [View Reports](view-reports.md)
-- [Enable Alerting for rancher-cis-benchmark](enable-alerting-for-rancher-cis-benchmark.md)
+- [Enable Alerting for rancher-compliance](enable-alerting-for-rancher-compliance.md)
 - [Configure Alerts for Periodic Scan on a Schedule](configure-alerts-for-periodic-scan-on-a-schedule.md)
-- [Create a Custom Benchmark Version to Run](create-a-custom-benchmark-version-to-run.md)
\ No newline at end of file
+- [Create a Custom Compliance Version to Run](create-a-custom-compliance-version-to-run.md)
diff --git a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule.md b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule.md
similarity index 56%
rename from docs/how-to-guides/advanced-user-guides/cis-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule.md
rename to docs/how-to-guides/advanced-user-guides/compliance-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule.md
index 204f95c05bd..5dfad6be847 100644
--- a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule.md
+++ b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule.md
@@ -3,7 +3,7 @@ title: Configure Alerts for Periodic Scan on a Schedule
 ---
 
-
+
 
 It is possible to run a ClusterScan on a schedule.
 
@@ -12,27 +12,27 @@ A scheduled scan can also specify if you should receive alerts when the scan com
 
 Alerts are supported only for a scan that runs on a schedule.
 
-The CIS Benchmark application supports two types of alerts:
+The Compliance application supports two types of alerts:
 
 - Alert on scan completion: This alert is sent out when the scan run finishes. The alert includes details including the ClusterScan's name and the ClusterScanProfile name.
 - Alert on scan failure: This alert is sent out if there are some test failures in the scan run or if the scan is in a `Fail` state.
 
 :::note Prerequisite
 
-Before enabling alerts for `rancher-cis-benchmark`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.](../../../reference-guides/monitoring-v2-configuration/receivers.md)
+Before enabling alerts for `rancher-compliance`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.](../../../reference-guides/monitoring-v2-configuration/receivers.md)
 
-While configuring the routes for `rancher-cis-benchmark` alerts, you can specify the matching using the key-value pair `job: rancher-cis-scan`. An example route configuration is [here.](../../../reference-guides/monitoring-v2-configuration/receivers.md#example-route-config-for-cis-scan-alerts)
+While configuring the routes for `rancher-compliance` alerts, you can specify the matching using the key-value pair `job: rancher-compliance-scan`. An example route configuration is [here.](../../../reference-guides/monitoring-v2-configuration/receivers.md#example-route-config-for-compliance-scan-alerts)
 
 :::
 
 To configure alerts for a scan that runs on a schedule,
 
-1. Please enable alerts on the `rancher-cis-benchmark` application. For more information, see [this page](../../../how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark.md).
+1. Please enable alerts on the `rancher-compliance` application. For more information, see [this page](../../../how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance.md).
 1. In the upper left corner, click **☰ > Cluster Management**.
-1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**.
-1. Click **CIS Benchmark > Scan**.
+1. On the **Clusters** page, go to the cluster where you want to run a Compliance scan and click **Explore**.
+1. Click **Compliance > Scan**.
 1. Click **Create**.
-1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
+1. Choose a cluster scan profile. The profile determines which Compliance version will be used and which tests will be performed. If you choose the Default profile, then the Compliance Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
 1. Choose the option **Run scan on a schedule**.
 1. Enter a valid [cron schedule expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) in the field **Schedule**.
 1. Check the boxes next to the Alert types under **Alerting**.
diff --git a/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/create-a-custom-compliance-version-to-run.md b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/create-a-custom-compliance-version-to-run.md
new file mode 100644
index 00000000000..97a896db883
--- /dev/null
+++ b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/create-a-custom-compliance-version-to-run.md
@@ -0,0 +1,13 @@
+---
+title: Create a Custom Compliance Version for Running a Cluster Scan
+---
+
+
+
+
+
+There could be some Kubernetes cluster setups that require custom configurations of the Compliance tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream Compliance checks look for them.
+
+It is now possible to create a custom Compliance version for running a cluster scan using the `rancher-compliance` application.
+
+For details, see [this page.](../../../integrations-in-rancher/cis-scans/custom-benchmark.md)
\ No newline at end of file
diff --git a/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance.md b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance.md
new file mode 100644
index 00000000000..d5328a0dd0c
--- /dev/null
+++ b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance.md
@@ -0,0 +1,24 @@
+---
+title: Enable Alerting for Rancher Compliance
+---
+
+
+
+
+
+Alerts can be configured to be sent out for a scan that runs on a schedule.
+
+:::note Prerequisite:
+
+Before enabling alerts for `rancher-compliance`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.](../../../reference-guides/monitoring-v2-configuration/receivers.md)
+
+While configuring the routes for `rancher-compliance` alerts, you can specify the matching using the key-value pair `job: rancher-compliance-scan`. An example route configuration is [here.](../../../reference-guides/monitoring-v2-configuration/receivers.md#example-route-config-for-compliance-scan-alerts)
+
+:::
+
+While installing or upgrading the `rancher-compliance` Helm chart, set the following flag to `true` in the `values.yaml`:
+
+```yaml
+alerts:
+  enabled: true
+```
\ No newline at end of file
diff --git a/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance.md b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance.md
new file mode 100644
index 00000000000..d7a00786ea5
--- /dev/null
+++ b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance.md
@@ -0,0 +1,21 @@
+---
+title: Install Rancher Compliance
+---
+
+
+
+
+
+1. In the upper left corner, click **☰ > Cluster Management**.
+1. On the **Clusters** page, go to the cluster where you want to install Compliance and click **Explore**.
+1. In the left navigation bar, click **Apps > Charts**.
+1. Click **Compliance**.
+1. Click **Install**.
+
+**Result:** The compliance scan application is deployed on the Kubernetes cluster.
+
+:::note
+
+If you are running Kubernetes v1.24 or earlier, and have a [Pod Security Policy](../../new-user-guides/authentication-permissions-and-global-configuration/create-pod-security-policies.md) (PSP) hardened cluster, Compliance 4.0.0 and later disable PSPs by default. To install Compliance on a PSP-hardened cluster, set `global.psp.enabled` to `true` in the values before installing the chart. [Pod Security Admission](../../new-user-guides/authentication-permissions-and-global-configuration/pod-security-standards.md) (PSA) hardened clusters aren't affected.
+
+:::
diff --git a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan-periodically-on-a-schedule.md b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan-periodically-on-a-schedule.md
similarity index 75%
rename from docs/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan-periodically-on-a-schedule.md
rename to docs/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan-periodically-on-a-schedule.md
index 076fbdf409b..49a9126da64 100644
--- a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan-periodically-on-a-schedule.md
+++ b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan-periodically-on-a-schedule.md
@@ -3,15 +3,15 @@ title: Run a Scan Periodically on a Schedule
 ---
 
-
+
 
 To run a ClusterScan on a schedule,
 
 1. In the upper left corner, click **☰ > Cluster Management**.
-1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**.
-1. Click **CIS Benchmark > Scan**.
-1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
+1. On the **Clusters** page, go to the cluster where you want to run a Compliance scan and click **Explore**.
+1. Click **Compliance > Scan**.
+1. Choose a cluster scan profile. The profile determines which Compliance version will be used and which tests will be performed. If you choose the Default profile, then the Compliance Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
 1. Choose the option **Run scan on a schedule**.
 1. Enter a valid cron schedule expression in the field **Schedule**.
 1. Choose a **Retention** count, which indicates the number of reports maintained for this recurring scan. By default this count is 3. When this retention limit is reached, older reports will get purged.
diff --git a/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan.md b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan.md
new file mode 100644
index 00000000000..55fc296f88d
--- /dev/null
+++ b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan.md
@@ -0,0 +1,26 @@
+---
+title: Run a Scan
+---
+
+
+
+
+
+When a ClusterScan custom resource is created, it launches a new compliance scan on the cluster for the chosen ClusterScanProfile.
+
+:::note
+
+There is currently a limitation of running only one compliance scan at a time for a cluster. If you create multiple ClusterScan custom resources, they will be run one after the other by the operator, and until one scan finishes, the rest of the ClusterScan custom resources will be in the "Pending" state.
+
+:::
+
+To run a scan,
+
+1. In the upper left corner, click **☰ > Cluster Management**.
+1. On the **Clusters** page, go to the cluster where you want to run a compliance scan and click **Explore**.
+1. Click **Compliance > Scan**.
+1. Click **Create**.
+1. Choose a cluster scan profile. The profile determines which Compliance version will be used and which tests will be performed. If you choose the Default profile, then the Compliance Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
+1. Click **Create**.
+
+**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears.
\ No newline at end of file diff --git a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/skip-tests.md b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md similarity index 55% rename from docs/how-to-guides/advanced-user-guides/cis-scan-guides/skip-tests.md rename to docs/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md index 7492bc03f0b..c28eb027a4b 100644 --- a/docs/how-to-guides/advanced-user-guides/cis-scan-guides/skip-tests.md +++ b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md @@ -3,36 +3,36 @@ title: Skip Tests --- - + -CIS scans can be run using test profiles with user-defined skips. +Compliance scans can be run using test profiles with user-defined skips. -To skip tests, you will create a custom CIS scan profile. A profile contains the configuration for the CIS scan, which includes the benchmark versions to use and any specific tests to skip in that benchmark. +To skip tests, you will create a custom Compliance scan profile. A profile contains the configuration for the Compliance scan, which includes the benchmark versions to use and any specific tests to skip in that benchmark. 1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. -1. Click **CIS Benchmark > Profile**. -1. From here, you can create a profile in multiple ways. To make a new profile, click **Create** and fill out the form in the UI. To make a new profile based on an existing profile, go to the existing profile and click **⋮ Clone**. If you are filling out the form, add the tests to skip using the test IDs, using the relevant CIS Benchmark as a reference. If you are creating the new test profile as YAML, you will add the IDs of the tests to skip in the `skipTests` directive. You will also give the profile a name: +1.
On the **Clusters** page, go to the cluster where you want to run a Compliance scan and click **Explore**. +1. Click **Compliance > Profile**. +1. From here, you can create a profile in multiple ways. To make a new profile, click **Create** and fill out the form in the UI. To make a new profile based on an existing profile, go to the existing profile and click **⋮ Clone**. If you are filling out the form, add the tests to skip using the test IDs, using the relevant benchmark as a reference. If you are creating the new test profile as YAML, you will add the IDs of the tests to skip in the `skipTests` directive. You will also give the profile a name: ```yaml - apiVersion: cis.cattle.io/v1 + apiVersion: compliance.cattle.io/v1 kind: ClusterScanProfile metadata: annotations: meta.helm.sh/release-name: clusterscan-operator - meta.helm.sh/release-namespace: cis-operator-system + meta.helm.sh/release-namespace: compliance-operator-system labels: app.kubernetes.io/managed-by: Helm name: "" spec: - benchmarkVersion: cis-1.5 + benchmarkVersion: rke2-cis-1.7 skipTests: - "1.1.20" - "1.1.21" ``` 1. Click **Create**. -**Result:** A new CIS scan profile is created. +**Result:** A new compliance profile is created. When you [run a scan](./run-a-scan.md) that uses this profile, the defined tests will be skipped during the scan. The skipped tests will be marked in the generated report as `Skip`. diff --git a/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/uninstall-rancher-compliance.md b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/uninstall-rancher-compliance.md new file mode 100644 index 00000000000..313acf79555 --- /dev/null +++ b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/uninstall-rancher-compliance.md @@ -0,0 +1,13 @@ +--- +title: Uninstall Rancher Compliance +--- + + + + + +1. From the **Cluster Dashboard,** go to the left navigation bar and click **Apps > Installed Apps**. +1.
Go to the `compliance-operator-system` namespace and check the boxes next to `rancher-compliance-crd` and `rancher-compliance`. +1. Click **Delete** and confirm **Delete**. + +**Result:** The `rancher-compliance` application is uninstalled. \ No newline at end of file diff --git a/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/view-reports.md b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/view-reports.md new file mode 100644 index 00000000000..ad042390485 --- /dev/null +++ b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/view-reports.md @@ -0,0 +1,23 @@ +--- +title: View Reports +--- + + + + + +To view the generated Compliance scan reports, + +1. In the upper left corner, click **☰ > Cluster Management**. +1. On the **Clusters** page, go to the cluster where you want to run a Compliance scan and click **Explore**. +1. Click **Compliance > Scan**. +1. The **Scans** page will show the generated reports. To see a detailed report, go to a scan report and click the name. + +You can download the report from the Scans list or from the scan detail page. + +To get the verbose version of the compliance scan results, run the following command on the cluster that was scanned. Note that the scan must be completed before this can be done. + +```console +export REPORT="scan-report-name" +kubectl get clusterscanreports.compliance.cattle.io $REPORT -o json | jq ".spec.reportJSON | fromjson" | jq -r ".actual_value_map_data" | base64 -d | gunzip | jq .
+``` diff --git a/docs/reference-guides/monitoring-v2-configuration/receivers.md b/docs/reference-guides/monitoring-v2-configuration/receivers.md index b1237e3646b..811abcf8e96 100644 --- a/docs/reference-guides/monitoring-v2-configuration/receivers.md +++ b/docs/reference-guides/monitoring-v2-configuration/receivers.md @@ -351,29 +351,29 @@ receivers: - service_key: 'database-integration-key' ``` -## Example Route Config for CIS Scan Alerts +## Example Route Config for Compliance Scan Alerts -While configuring the routes for `rancher-cis-benchmark` alerts, you can specify the matching using the key-value pair `job: rancher-cis-scan`. +While configuring the routes for `rancher-compliance` alerts, you can specify the matching using the key-value pair `job: rancher-compliance-scan`. -For example, the following example route configuration could be used with a Slack receiver named `test-cis`: +For example, the following example route configuration could be used with a Slack receiver named `test-compliance`: ```yaml spec: - receiver: test-cis + receiver: test-compliance group_by: # - string group_wait: 30s group_interval: 30s repeat_interval: 30s match: - job: rancher-cis-scan + job: rancher-compliance-scan # key: string match_re: {} # key: string ``` -For more information on enabling alerting for `rancher-cis-benchmark`, see [this section.](../../how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark.md) +For more information on enabling alerting for `rancher-compliance`, see [this section.](../../how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance.md) ## Trusted CA for Notifiers diff --git a/docusaurus.config.js b/docusaurus.config.js index abadb6b6c6c..9dd2f0eb776 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -3,41 +3,41 @@ const tailwindPlugin = require('./plugins/tailwind-plugin.cjs'); module.exports = { - title: 'Rancher', - tagline: '', - url:
'https://ranchermanager.docs.rancher.com/', - baseUrl: '/', - onBrokenLinks: 'warn', - onBrokenMarkdownLinks: 'warn', - favicon: 'img/favicon.png', - organizationName: 'rancher', // Usually your GitHub org/user name. - projectName: 'rancher-docs', // Usually your repo name. + title: "Rancher", + tagline: "", + url: "https://ranchermanager.docs.rancher.com/", + baseUrl: "/", + onBrokenLinks: "warn", + onBrokenMarkdownLinks: "warn", + favicon: "img/favicon.png", + organizationName: "rancher", // Usually your GitHub org/user name. + projectName: "rancher-docs", // Usually your repo name. trailingSlash: false, i18n: { - defaultLocale: 'en', - locales: ['en', 'zh'], + defaultLocale: "en", + locales: ["en", "zh"], localeConfigs: { en: { - label: 'English', + label: "English", }, zh: { - label: '简体中文', + label: "简体中文", }, }, }, webpack: { jsLoader: (isServer) => ({ - loader: require.resolve('swc-loader'), + loader: require.resolve("swc-loader"), options: { jsc: { parser: { - syntax: 'typescript', + syntax: "typescript", tsx: true, }, - target: 'es2017', + target: "es2017", }, module: { - type: isServer ? 'commonjs' : 'es6', + type: isServer ? "commonjs" : "es6", }, }, }), @@ -50,12 +50,12 @@ module.exports = { }, algolia: { // The application ID provided by Algolia - appId: '30NEY6C9UY', + appId: "30NEY6C9UY", // Public API key: it is safe to commit it - apiKey: '8df59222c0ad79fdacb4d45d11e21d2e', + apiKey: "8df59222c0ad79fdacb4d45d11e21d2e", - indexName: 'docs_ranchermanager_rancher_io', + indexName: "docs_ranchermanager_rancher_io", // Optional: see doc section below contextualSearch: true, @@ -64,1485 +64,1589 @@ module.exports = { searchParameters: {}, // Optional: path for search page that enabled by default (`false` to disable it) - searchPagePath: 'search', + searchPagePath: "search", //... 
other Algolia params }, colorMode: { // 'light' | 'dark' - defaultMode: 'light', + defaultMode: "light", // Hides the switch in the navbar // Useful if you want to support a single color mode disableSwitch: true, }, prism: { - additionalLanguages: ['rust'], + additionalLanguages: ["rust"], }, navbar: { - title: '', + title: "", logo: { - alt: 'logo', - src: 'img/rancher-logo-horiz-color.svg', + alt: "logo", + src: "img/rancher-logo-horiz-color.svg", // href: 'en', }, items: [ { - type: 'docsVersionDropdown', - position: 'left', - dropdownItemsAfter: [{to: '/versions', label: 'All versions'}], + type: "docsVersionDropdown", + position: "left", + dropdownItemsAfter: [{ to: "/versions", label: "All versions" }], dropdownActiveClassDisabled: false, }, { - type: 'localeDropdown', - position: 'left', + type: "localeDropdown", + position: "left", }, { - type: 'search', - position: 'left', + type: "search", + position: "left", }, { - type: 'dropdown', - label: 'Quick Links', - position: 'right', + type: "dropdown", + label: "Quick Links", + position: "right", items: [ { - href: 'https://github.com/rancher/rancher', - label: 'GitHub', + href: "https://github.com/rancher/rancher", + label: "GitHub", }, { - href: 'https://github.com/rancher/rancher-docs', - label: 'Docs GitHub', + href: "https://github.com/rancher/rancher-docs", + label: "Docs GitHub", }, - ] + ], }, { - type: 'dropdown', - label: 'More from SUSE', - position: 'right', + type: "dropdown", + label: "More from SUSE", + position: "right", items: [ { - href: 'https://www.rancher.com', - label: 'Rancher', - className: 'navbar__icon navbar__rancher' + href: "https://www.rancher.com", + label: "Rancher", + className: "navbar__icon navbar__rancher", }, { - type: 'html', + type: "html", value: '
', }, { - href: 'https://elemental.docs.rancher.com/', - label: 'Elemental', - className: 'navbar__icon navbar__elemental' + href: "https://elemental.docs.rancher.com/", + label: "Elemental", + className: "navbar__icon navbar__elemental", }, { - href: 'https://fleet.rancher.io/', - label: 'Fleet', - className: 'navbar__icon navbar__fleet' + href: "https://fleet.rancher.io/", + label: "Fleet", + className: "navbar__icon navbar__fleet", }, { - href: 'https://harvesterhci.io', - label: 'Harvester', - className: 'navbar__icon navbar__harvester' + href: "https://harvesterhci.io", + label: "Harvester", + className: "navbar__icon navbar__harvester", }, { - href: 'https://rancherdesktop.io/', - label: 'Rancher Desktop', - className: 'navbar__icon navbar__rancher__desktop' + href: "https://rancherdesktop.io/", + label: "Rancher Desktop", + className: "navbar__icon navbar__rancher__desktop", }, { - type: 'html', + type: "html", value: '
', }, { - href: 'https://opensource.suse.com', - label: 'More Projects...', - className: 'navbar__icon navbar__suse' + href: "https://opensource.suse.com", + label: "More Projects...", + className: "navbar__icon navbar__suse", }, - ] - } + ], + }, ], }, footer: { - style: 'dark', + style: "dark", links: [], copyright: `Copyright © ${new Date().getFullYear()} SUSE Rancher. All Rights Reserved.`, }, }, presets: [ [ - '@docusaurus/preset-classic', + "@docusaurus/preset-classic", { docs: { - routeBasePath: '/', // Serve the docs at the site's root + routeBasePath: "/", // Serve the docs at the site's root /* other docs plugin options */ - sidebarPath: require.resolve('./sidebars.js'), + sidebarPath: require.resolve("./sidebars.js"), showLastUpdateTime: true, - editUrl: 'https://github.com/rancher/rancher-docs/edit/main/', - lastVersion: 'current', + editUrl: "https://github.com/rancher/rancher-docs/edit/main/", + lastVersion: "current", versions: { current: { - label: 'Latest', + label: "Latest", }, '2.12': { - label: 'v2.12', - path: 'v2.12', - banner: 'none' + label: "v2.12", + path: "v2.12", + banner: "none" }, - '2.11': { - label: 'v2.11', - path: 'v2.11', - banner: 'none' + 2.11: { + label: "v2.11", + path: "v2.11", + banner: "none", }, - '2.10': { - label: 'v2.10', - path: 'v2.10', - banner: 'none' + "2.10": { + label: "v2.10", + path: "v2.10", + banner: "none", }, 2.9: { - label: 'v2.9', - path: 'v2.9', - banner: 'none' + label: "v2.9", + path: "v2.9", + banner: "none", }, 2.8: { - label: 'v2.8', - path: 'v2.8', - className: 'toArchive' + label: "v2.8", + path: "v2.8", + className: "toArchive", }, 2.7: { - label: 'v2.7 (Archived)', - path: 'v2.7', + label: "v2.7 (Archived)", + path: "v2.7", banner: `none`, - noIndex: true + noIndex: true, }, 2.6: { - label: 'v2.6 (Archived)', - path: 'v2.6', + label: "v2.6 (Archived)", + path: "v2.6", banner: `none`, - noIndex: true + noIndex: true, }, 2.5: { - label: 'v2.5 (Archived)', - path: 'v2.5', + label: "v2.5 
(Archived)", + path: "v2.5", banner: `none`, - noIndex: true + noIndex: true, }, - '2.0-2.4': { - label: 'v2.0-v2.4 (Archived)', - path: 'v2.0-v2.4', - banner: 'none', - noIndex: true + "2.0-2.4": { + label: "v2.0-v2.4 (Archived)", + path: "v2.0-v2.4", + banner: "none", + noIndex: true, }, }, }, blog: false, // Optional: disable the blog plugin // ... theme: { - customCss: [require.resolve('./src/css/custom.css')], + customCss: [require.resolve("./src/css/custom.css")], }, googleTagManager: { - containerId: 'GTM-57KS2MW', + containerId: "GTM-57KS2MW", }, }, ], [ - 'redocusaurus', + "redocusaurus", { // Plugin Options for loading OpenAPI files specs: [ { - id: 'rancher-api-v2-11', - spec: 'openapi/swagger-v2.11.json', + id: "rancher-api-v2-11", + spec: "openapi/swagger-v2.11.json", // route: '/api/', }, { - id: 'rancher-api-v2-10', - spec: 'openapi/swagger-v2.10.json', + id: "rancher-api-v2-10", + spec: "openapi/swagger-v2.10.json", // route: '/api/', }, { - id: 'rancher-api-v2-9', - spec: 'openapi/swagger-v2.9.json', + id: "rancher-api-v2-9", + spec: "openapi/swagger-v2.9.json", // route: '/api/', }, { - id: 'rancher-api-v2-8', - spec: 'openapi/swagger-v2.8.json', + id: "rancher-api-v2-8", + spec: "openapi/swagger-v2.8.json", // route: '/api/', }, ], // Theme Options for modifying how redoc renders them theme: { // Change with your site colors - primaryColor: '#1890ff', + primaryColor: "#1890ff", }, }, ], ], plugins: [ - tailwindPlugin, - [ - '@docusaurus/plugin-client-redirects', + tailwindPlugin, + [ + "@docusaurus/plugin-client-redirects", { - fromExtensions: ['html', 'htm'], + fromExtensions: ["html", "htm"], redirects: [ - { // Redirects for multi-cluster apps removal (rancher-docs/issues/734) - to: '/integrations-in-rancher/fleet', - from: ['/pages-for-subheaders/deploy-apps-across-clusters', '/how-to-guides/new-user-guides/deploy-apps-across-clusters', '/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet', 
'/how-to-guides/new-user-guides/deploy-apps-across-clusters/multi-cluster-apps'] + { + // Redirects for multi-cluster apps removal (rancher-docs/issues/734) + to: "/integrations-in-rancher/fleet", + from: [ + "/pages-for-subheaders/deploy-apps-across-clusters", + "/how-to-guides/new-user-guides/deploy-apps-across-clusters", + "/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet", + "/how-to-guides/new-user-guides/deploy-apps-across-clusters/multi-cluster-apps", + ], }, { - to: '/v2.8/integrations-in-rancher/fleet', - from: ['/v2.8/pages-for-subheaders/deploy-apps-across-clusters', '/v2.8/how-to-guides/new-user-guides/deploy-apps-across-clusters', '/v2.8/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet', '/v2.8/how-to-guides/new-user-guides/deploy-apps-across-clusters/multi-cluster-apps'] - },// Redirects for multi-cluster apps removal (rancher-docs/issues/734) (end) + to: "/v2.8/integrations-in-rancher/fleet", + from: [ + "/v2.8/pages-for-subheaders/deploy-apps-across-clusters", + "/v2.8/how-to-guides/new-user-guides/deploy-apps-across-clusters", + "/v2.8/how-to-guides/new-user-guides/deploy-apps-across-clusters/fleet", + "/v2.8/how-to-guides/new-user-guides/deploy-apps-across-clusters/multi-cluster-apps", + ], + }, // Redirects for multi-cluster apps removal (rancher-docs/issues/734) (end) { - to: '/faq/deprecated-features', - from: '/faq/deprecated-features-in-v2.5' + to: "/faq/deprecated-features", + from: "/faq/deprecated-features-in-v2.5", }, { - to: '/v2.8/faq/deprecated-features', - from: '/v2.8/faq/deprecated-features-in-v2.5' + to: "/v2.8/faq/deprecated-features", + from: "/v2.8/faq/deprecated-features-in-v2.5", }, - { // Redirects for pages-for-subheaders removal [2.8] - to: '/v2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers', - from: '/v2.8/pages-for-subheaders/about-provisioning-drivers' + { + // Redirects for pages-for-subheaders removal [2.8] + to: 
"/v2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers", + from: "/v2.8/pages-for-subheaders/about-provisioning-drivers", }, { - to: '/v2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates', - from: '/v2.8/pages-for-subheaders/about-rke1-templates' + to: "/v2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates", + from: "/v2.8/pages-for-subheaders/about-rke1-templates", }, { - to: '/v2.8/how-to-guides/new-user-guides/manage-clusters/access-clusters', - from: '/v2.8/pages-for-subheaders/access-clusters' + to: "/v2.8/how-to-guides/new-user-guides/manage-clusters/access-clusters", + from: "/v2.8/pages-for-subheaders/access-clusters", }, { - to: '/v2.8/how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides/advanced-configuration', - from: '/v2.8/pages-for-subheaders/advanced-configuration' + to: "/v2.8/how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides/advanced-configuration", + from: "/v2.8/pages-for-subheaders/advanced-configuration", }, { - to: '/v2.8/how-to-guides/advanced-user-guides', - from: '/v2.8/pages-for-subheaders/advanced-user-guides' + to: "/v2.8/how-to-guides/advanced-user-guides", + from: "/v2.8/pages-for-subheaders/advanced-user-guides", }, { - to: '/v2.8/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install', - from: '/v2.8/pages-for-subheaders/air-gapped-helm-cli-install' + to: "/v2.8/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install", + from: "/v2.8/pages-for-subheaders/air-gapped-helm-cli-install", }, { - to: '/v2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config', - from: '/v2.8/pages-for-subheaders/authentication-config' + to: 
"/v2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config", + from: "/v2.8/pages-for-subheaders/authentication-config", }, { - to: '/v2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration', - from: '/v2.8/pages-for-subheaders/authentication-permissions-and-global-configuration' + to: "/v2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration", + from: "/v2.8/pages-for-subheaders/authentication-permissions-and-global-configuration", }, { - to: '/v2.8/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace', - from: '/v2.8/pages-for-subheaders/aws-cloud-marketplace' + to: "/v2.8/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace", + from: "/v2.8/pages-for-subheaders/aws-cloud-marketplace", }, { - to: '/v2.8/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery', - from: '/v2.8/pages-for-subheaders/backup-restore-and-disaster-recovery' + to: "/v2.8/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery", + from: "/v2.8/pages-for-subheaders/backup-restore-and-disaster-recovery", }, { - to: '/v2.8/reference-guides/backup-restore-configuration', - from: '/v2.8/pages-for-subheaders/backup-restore-configuration' + to: "/v2.8/reference-guides/backup-restore-configuration", + from: "/v2.8/pages-for-subheaders/backup-restore-configuration", }, { - to: '/v2.8/reference-guides/best-practices', - from: '/v2.8/pages-for-subheaders/best-practices' + to: "/v2.8/reference-guides/best-practices", + from: "/v2.8/pages-for-subheaders/best-practices", }, { - to: '/v2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters', - from: '/v2.8/pages-for-subheaders/checklist-for-production-ready-clusters' + to: "/v2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters", + from: 
"/v2.8/pages-for-subheaders/checklist-for-production-ready-clusters", }, { - to: '/v2.8/how-to-guides/advanced-user-guides/cis-scan-guides', - from: '/v2.8/pages-for-subheaders/cis-scan-guides' + to: "/v2.8/how-to-guides/advanced-user-guides/cis-scan-guides", + from: "/v2.8/pages-for-subheaders/cis-scan-guides", }, { - to: '/v2.8/integrations-in-rancher/cis-scans', - from: '/v2.8/pages-for-subheaders/cis-scans' + to: "/v2.8/integrations-in-rancher/cis-scans", + from: "/v2.8/pages-for-subheaders/cis-scans", }, { - to: '/v2.8/reference-guides/cli-with-rancher', - from: '/v2.8/pages-for-subheaders/cli-with-rancher' + to: "/v2.8/reference-guides/cli-with-rancher", + from: "/v2.8/pages-for-subheaders/cli-with-rancher", }, { - to: '/v2.8/integrations-in-rancher/cloud-marketplace', - from: '/v2.8/pages-for-subheaders/cloud-marketplace' + to: "/v2.8/integrations-in-rancher/cloud-marketplace", + from: "/v2.8/pages-for-subheaders/cloud-marketplace", }, { - to: '/v2.8/reference-guides/cluster-configuration', - from: '/v2.8/pages-for-subheaders/cluster-configuration' + to: "/v2.8/reference-guides/cluster-configuration", + from: "/v2.8/pages-for-subheaders/cluster-configuration", }, { - to: '/v2.8/integrations-in-rancher/istio/configuration-options', - from: '/v2.8/pages-for-subheaders/configuration-options' + to: "/v2.8/integrations-in-rancher/istio/configuration-options", + from: "/v2.8/pages-for-subheaders/configuration-options", }, { - to: '/v2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-microsoft-ad-federation-service-saml', - from: '/v2.8/pages-for-subheaders/configure-microsoft-ad-federation-service-saml' + to: "/v2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-microsoft-ad-federation-service-saml", + from: "/v2.8/pages-for-subheaders/configure-microsoft-ad-federation-service-saml", }, { - to: 
'/v2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-openldap', - from: '/v2.8/pages-for-subheaders/configure-openldap' + to: "/v2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-openldap", + from: "/v2.8/pages-for-subheaders/configure-openldap", }, { - to: '/v2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-shibboleth-saml', - from: '/v2.8/pages-for-subheaders/configure-shibboleth-saml' + to: "/v2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-shibboleth-saml", + from: "/v2.8/pages-for-subheaders/configure-shibboleth-saml", }, { - to: '/v2.8/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage', - from: '/v2.8/pages-for-subheaders/create-kubernetes-persistent-storage' + to: "/v2.8/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage", + from: "/v2.8/pages-for-subheaders/create-kubernetes-persistent-storage", }, { - to: '/v2.8/integrations-in-rancher/logging/custom-resource-configuration', - from: '/v2.8/pages-for-subheaders/custom-resource-configuration' + to: "/v2.8/integrations-in-rancher/logging/custom-resource-configuration", + from: "/v2.8/pages-for-subheaders/custom-resource-configuration", }, { - to: '/v2.8/getting-started/quick-start-guides/deploy-rancher-manager', - from: '/v2.8/pages-for-subheaders/deploy-rancher-manager' + to: "/v2.8/getting-started/quick-start-guides/deploy-rancher-manager", + from: "/v2.8/pages-for-subheaders/deploy-rancher-manager", }, { - to: '/v2.8/getting-started/quick-start-guides/deploy-workloads', - from: '/v2.8/pages-for-subheaders/deploy-rancher-workloads' + to: "/v2.8/getting-started/quick-start-guides/deploy-workloads", + from: "/v2.8/pages-for-subheaders/deploy-rancher-workloads", }, { - to: '/v2.8/reference-guides/cluster-configuration/downstream-cluster-configuration', - from: 
'/v2.8/pages-for-subheaders/downstream-cluster-configuration' + to: "/v2.8/reference-guides/cluster-configuration/downstream-cluster-configuration", + from: "/v2.8/pages-for-subheaders/downstream-cluster-configuration", }, { - to: '/v2.8/how-to-guides/advanced-user-guides/enable-experimental-features', - from: '/v2.8/pages-for-subheaders/enable-experimental-features' + to: "/v2.8/how-to-guides/advanced-user-guides/enable-experimental-features", + from: "/v2.8/pages-for-subheaders/enable-experimental-features", }, { - to: '/v2.8/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration', - from: '/v2.8/pages-for-subheaders/gke-cluster-configuration' + to: "/v2.8/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration", + from: "/v2.8/pages-for-subheaders/gke-cluster-configuration", }, { - to: '/v2.8/how-to-guides/new-user-guides/helm-charts-in-rancher', - from: '/v2.8/pages-for-subheaders/helm-charts-in-rancher' + to: "/v2.8/how-to-guides/new-user-guides/helm-charts-in-rancher", + from: "/v2.8/pages-for-subheaders/helm-charts-in-rancher", }, { - to: '/v2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/horizontal-pod-autoscaler', - from: '/v2.8/pages-for-subheaders/horizontal-pod-autoscaler' + to: "/v2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/horizontal-pod-autoscaler", + from: "/v2.8/pages-for-subheaders/horizontal-pod-autoscaler", }, { - to: '/v2.8/how-to-guides/new-user-guides/infrastructure-setup', - from: '/v2.8/pages-for-subheaders/infrastructure-setup' + to: "/v2.8/how-to-guides/new-user-guides/infrastructure-setup", + from: "/v2.8/pages-for-subheaders/infrastructure-setup", }, { - to: '/v2.8/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler', - from: '/v2.8/pages-for-subheaders/install-cluster-autoscaler' + to: "/v2.8/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler", + from: 
"/v2.8/pages-for-subheaders/install-cluster-autoscaler", }, { - to: '/v2.8/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster', - from: '/v2.8/pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster' + to: "/v2.8/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster", + from: "/v2.8/pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster", }, { - to: '/v2.8/getting-started/installation-and-upgrade', - from: '/v2.8/pages-for-subheaders/installation-and-upgrade' + to: "/v2.8/getting-started/installation-and-upgrade", + from: "/v2.8/pages-for-subheaders/installation-and-upgrade", }, { - to: '/v2.8/getting-started/installation-and-upgrade/installation-references', - from: '/v2.8/pages-for-subheaders/installation-references' + to: "/v2.8/getting-started/installation-and-upgrade/installation-references", + from: "/v2.8/pages-for-subheaders/installation-references", }, { - to: '/v2.8/getting-started/installation-and-upgrade/installation-requirements', - from: '/v2.8/pages-for-subheaders/installation-requirements' + to: "/v2.8/getting-started/installation-and-upgrade/installation-requirements", + from: "/v2.8/pages-for-subheaders/installation-requirements", }, { - to: '/v2.8/how-to-guides/advanced-user-guides/istio-setup-guide', - from: '/v2.8/pages-for-subheaders/istio-setup-guide' + to: "/v2.8/how-to-guides/advanced-user-guides/istio-setup-guide", + from: "/v2.8/pages-for-subheaders/istio-setup-guide", }, { - to: '/v2.8/integrations-in-rancher/istio/', - from: '/v2.8/pages-for-subheaders/istio' + to: "/v2.8/integrations-in-rancher/istio/", + from: "/v2.8/pages-for-subheaders/istio", }, { - to: '/v2.8/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide', - from: '/v2.8/pages-for-subheaders/k3s-hardening-guide' + to: "/v2.8/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide", + from: "/v2.8/pages-for-subheaders/k3s-hardening-guide", }, { - to: 
'/v2.8/how-to-guides/new-user-guides/kubernetes-cluster-setup', - from: '/v2.8/pages-for-subheaders/kubernetes-cluster-setup' + to: "/v2.8/how-to-guides/new-user-guides/kubernetes-cluster-setup", + from: "/v2.8/pages-for-subheaders/kubernetes-cluster-setup", }, { - to: '/v2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup', - from: '/v2.8/pages-for-subheaders/kubernetes-clusters-in-rancher-setup' + to: "/v2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup", + from: "/v2.8/pages-for-subheaders/kubernetes-clusters-in-rancher-setup", }, { - to: '/v2.8/troubleshooting/kubernetes-components', - from: '/v2.8/pages-for-subheaders/kubernetes-components' + to: "/v2.8/troubleshooting/kubernetes-components", + from: "/v2.8/pages-for-subheaders/kubernetes-components", }, { - to: '/v2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/', - from: '/v2.8/pages-for-subheaders/kubernetes-resources-setup' + to: "/v2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/", + from: "/v2.8/pages-for-subheaders/kubernetes-resources-setup", }, { - to: '/v2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher', - from: '/v2.8/pages-for-subheaders/launch-kubernetes-with-rancher' + to: "/v2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher", + from: "/v2.8/pages-for-subheaders/launch-kubernetes-with-rancher", }, { - to: '/v2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller', - from: '/v2.8/pages-for-subheaders/load-balancer-and-ingress-controller' + to: "/v2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller", + from: "/v2.8/pages-for-subheaders/load-balancer-and-ingress-controller", }, { - to: '/v2.8/integrations-in-rancher/logging/', - from: '/v2.8/pages-for-subheaders/logging' + to: "/v2.8/integrations-in-rancher/logging/", + from: "/v2.8/pages-for-subheaders/logging", }, { - to: 
'/v2.8/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration', - from: '/v2.8/pages-for-subheaders/machine-configuration' + to: "/v2.8/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration", + from: "/v2.8/pages-for-subheaders/machine-configuration", }, { - to: '/v2.8/how-to-guides/new-user-guides/manage-clusters', - from: '/v2.8/pages-for-subheaders/manage-clusters' + to: "/v2.8/how-to-guides/new-user-guides/manage-clusters", + from: "/v2.8/pages-for-subheaders/manage-clusters", }, { - to: '/v2.8/how-to-guides/advanced-user-guides/manage-projects/manage-project-resource-quotas', - from: '/v2.8/pages-for-subheaders/manage-project-resource-quotas' + to: "/v2.8/how-to-guides/advanced-user-guides/manage-projects/manage-project-resource-quotas", + from: "/v2.8/pages-for-subheaders/manage-project-resource-quotas", }, { - to: '/v2.8/how-to-guides/advanced-user-guides/manage-projects', - from: '/v2.8/pages-for-subheaders/manage-projects' + to: "/v2.8/how-to-guides/advanced-user-guides/manage-projects", + from: "/v2.8/pages-for-subheaders/manage-projects", }, { - to: '/v2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac', - from: '/v2.8/pages-for-subheaders/manage-role-based-access-control-rbac' + to: "/v2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac", + from: "/v2.8/pages-for-subheaders/manage-role-based-access-control-rbac", }, { - to: '/v2.8/how-to-guides/advanced-user-guides/monitoring-alerting-guides', - from: '/v2.8/pages-for-subheaders/monitoring-alerting-guides' + to: "/v2.8/how-to-guides/advanced-user-guides/monitoring-alerting-guides", + from: "/v2.8/pages-for-subheaders/monitoring-alerting-guides", }, { - to: '/v2.8/integrations-in-rancher/monitoring-and-alerting', - from: '/v2.8/pages-for-subheaders/monitoring-and-alerting' + to: 
"/v2.8/integrations-in-rancher/monitoring-and-alerting",
+ from: "/v2.8/pages-for-subheaders/monitoring-and-alerting",
},
{
- to: '/v2.8/how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides',
- from: '/v2.8/pages-for-subheaders/monitoring-v2-configuration-guides'
+ to: "/v2.8/how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides",
+ from: "/v2.8/pages-for-subheaders/monitoring-v2-configuration-guides",
},
{
- to: '/v2.8/reference-guides/monitoring-v2-configuration',
- from: '/v2.8/pages-for-subheaders/monitoring-v2-configuration'
+ to: "/v2.8/reference-guides/monitoring-v2-configuration",
+ from: "/v2.8/pages-for-subheaders/monitoring-v2-configuration",
},
{
- to: '/v2.8/how-to-guides/new-user-guides',
- from: '/v2.8/pages-for-subheaders/new-user-guides'
+ to: "/v2.8/how-to-guides/new-user-guides",
+ from: "/v2.8/pages-for-subheaders/new-user-guides",
},
{
- to: '/v2.8/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration',
- from: '/v2.8/pages-for-subheaders/node-template-configuration'
+ to: "/v2.8/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration",
+ from: "/v2.8/pages-for-subheaders/node-template-configuration",
},
{
- to: '/v2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/nutanix',
- from: '/v2.8/pages-for-subheaders/nutanix'
+ to: "/v2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/nutanix",
+ from: "/v2.8/pages-for-subheaders/nutanix",
},
{
- to: '/v2.8/getting-started/installation-and-upgrade/other-installation-methods',
- from: '/v2.8/pages-for-subheaders/other-installation-methods'
+ to: "/v2.8/getting-started/installation-and-upgrade/other-installation-methods",
+ from: "/v2.8/pages-for-subheaders/other-installation-methods",
},
{
- to: 
'/v2.8/how-to-guides/advanced-user-guides/monitoring-alerting-guides/prometheus-federator-guides',
- from: '/v2.8/pages-for-subheaders/prometheus-federator-guides'
+ to: "/v2.8/how-to-guides/advanced-user-guides/monitoring-alerting-guides/prometheus-federator-guides",
+ from: "/v2.8/pages-for-subheaders/prometheus-federator-guides",
},
{
- to: '/v2.8/reference-guides/prometheus-federator',
- from: '/v2.8/pages-for-subheaders/prometheus-federator'
+ to: "/v2.8/reference-guides/prometheus-federator",
+ from: "/v2.8/pages-for-subheaders/prometheus-federator",
},
{
- to: '/v2.8/how-to-guides/new-user-guides/manage-clusters/provisioning-storage-examples',
- from: '/v2.8/pages-for-subheaders/provisioning-storage-examples'
+ to: "/v2.8/how-to-guides/new-user-guides/manage-clusters/provisioning-storage-examples",
+ from: "/v2.8/pages-for-subheaders/provisioning-storage-examples",
},
{
- to: '/v2.8/getting-started/quick-start-guides',
- from: '/v2.8/pages-for-subheaders/quick-start-guides'
+ to: "/v2.8/getting-started/quick-start-guides",
+ from: "/v2.8/pages-for-subheaders/quick-start-guides",
},
{
- to: '/v2.8/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy',
- from: '/v2.8/pages-for-subheaders/rancher-behind-an-http-proxy'
+ to: "/v2.8/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy",
+ from: "/v2.8/pages-for-subheaders/rancher-behind-an-http-proxy",
},
{
- to: '/v2.8/reference-guides/rancher-security/hardening-guides',
- from: '/v2.8/pages-for-subheaders/rancher-hardening-guides'
+ to: "/v2.8/reference-guides/rancher-security/hardening-guides",
+ from: "/v2.8/pages-for-subheaders/rancher-hardening-guides",
},
{
- to: '/v2.8/reference-guides/best-practices/rancher-managed-clusters',
- from: '/v2.8/pages-for-subheaders/rancher-managed-clusters'
+ to: "/v2.8/reference-guides/best-practices/rancher-managed-clusters",
+ from: 
"/v2.8/pages-for-subheaders/rancher-managed-clusters",
},
{
- to: '/v2.8/reference-guides/rancher-manager-architecture',
- from: '/v2.8/pages-for-subheaders/rancher-manager-architecture'
+ to: "/v2.8/reference-guides/rancher-manager-architecture",
+ from: "/v2.8/pages-for-subheaders/rancher-manager-architecture",
},
{
- to: '/v2.8/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker',
- from: '/v2.8/pages-for-subheaders/rancher-on-a-single-node-with-docker'
+ to: "/v2.8/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker",
+ from: "/v2.8/pages-for-subheaders/rancher-on-a-single-node-with-docker",
},
{
- to: '/v2.8/reference-guides/rancher-security',
- from: '/v2.8/pages-for-subheaders/rancher-security'
+ to: "/v2.8/reference-guides/rancher-security",
+ from: "/v2.8/pages-for-subheaders/rancher-security",
},
{
- to: '/v2.8/reference-guides/cluster-configuration/rancher-server-configuration',
- from: '/v2.8/pages-for-subheaders/rancher-server-configuration'
+ to: "/v2.8/reference-guides/cluster-configuration/rancher-server-configuration",
+ from: "/v2.8/pages-for-subheaders/rancher-server-configuration",
},
{
- to: '/v2.8/reference-guides/best-practices/rancher-server',
- from: '/v2.8/pages-for-subheaders/rancher-server'
+ to: "/v2.8/reference-guides/best-practices/rancher-server",
+ from: "/v2.8/pages-for-subheaders/rancher-server",
},
{
- to: '/v2.8/getting-started/installation-and-upgrade/resources',
- from: '/v2.8/pages-for-subheaders/resources'
+ to: "/v2.8/getting-started/installation-and-upgrade/resources",
+ from: "/v2.8/pages-for-subheaders/resources",
},
{
- to: '/v2.8/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide',
- from: '/v2.8/pages-for-subheaders/rke1-hardening-guide'
+ to: "/v2.8/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide",
+ from: "/v2.8/pages-for-subheaders/rke1-hardening-guide",
},
{
- to: 
'/v2.8/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide',
- from: '/v2.8/pages-for-subheaders/rke2-hardening-guide'
+ to: "/v2.8/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide",
+ from: "/v2.8/pages-for-subheaders/rke2-hardening-guide",
},
{
- to: '/v2.8/reference-guides/rancher-security/selinux-rpm',
- from: '/v2.8/pages-for-subheaders/selinux-rpm'
+ to: "/v2.8/reference-guides/rancher-security/selinux-rpm",
+ from: "/v2.8/pages-for-subheaders/selinux-rpm",
},
{
- to: '/v2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers',
- from: '/v2.8/pages-for-subheaders/set-up-cloud-providers'
+ to: "/v2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers",
+ from: "/v2.8/pages-for-subheaders/set-up-cloud-providers",
},
{
- to: '/v2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers',
- from: '/v2.8/pages-for-subheaders/set-up-clusters-from-hosted-kubernetes-providers'
+ to: "/v2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers",
+ from: "/v2.8/pages-for-subheaders/set-up-clusters-from-hosted-kubernetes-providers",
},
{
- to: '/v2.8/reference-guides/single-node-rancher-in-docker',
- from: '/v2.8/pages-for-subheaders/single-node-rancher-in-docker'
+ to: "/v2.8/reference-guides/single-node-rancher-in-docker",
+ from: "/v2.8/pages-for-subheaders/single-node-rancher-in-docker",
},
{
- to: '/v2.8/reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes',
- from: '/v2.8/pages-for-subheaders/use-existing-nodes'
+ to: "/v2.8/reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes",
+ from: "/v2.8/pages-for-subheaders/use-existing-nodes",
},
{
- to: '/v2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider',
- from: 
'/v2.8/pages-for-subheaders/use-new-nodes-in-an-infra-provider'
+ to: "/v2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider",
+ from: "/v2.8/pages-for-subheaders/use-new-nodes-in-an-infra-provider",
},
{
- to: '/v2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters',
- from: '/v2.8/pages-for-subheaders/use-windows-clusters'
+ to: "/v2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters",
+ from: "/v2.8/pages-for-subheaders/use-windows-clusters",
},
{
- to: '/v2.8/reference-guides/user-settings',
- from: '/v2.8/pages-for-subheaders/user-settings'
+ to: "/v2.8/reference-guides/user-settings",
+ from: "/v2.8/pages-for-subheaders/user-settings",
},
{
- to: '/v2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere',
- from: '/v2.8/pages-for-subheaders/vsphere'
+ to: "/v2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere",
+ from: "/v2.8/pages-for-subheaders/vsphere",
},
{
- to: '/v2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/workloads-and-pods',
- from: '/v2.8/pages-for-subheaders/workloads-and-pods'
+ to: "/v2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/workloads-and-pods",
+ from: "/v2.8/pages-for-subheaders/workloads-and-pods",
},
// Redirects for pages-for-subheaders removal [2.8] (end)
- { // Redirects for pages-for-subheaders removal [latest]
- to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers',
- from: '/pages-for-subheaders/about-provisioning-drivers'
+ {
+ // Redirects for pages-for-subheaders removal [latest]
+ to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers",
+ from: "/pages-for-subheaders/about-provisioning-drivers",
},
{
- to: 
'/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates',
- from: '/pages-for-subheaders/about-rke1-templates'
+ to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates",
+ from: "/pages-for-subheaders/about-rke1-templates",
},
{
- to: '/how-to-guides/new-user-guides/manage-clusters/access-clusters',
- from: '/pages-for-subheaders/access-clusters'
+ to: "/how-to-guides/new-user-guides/manage-clusters/access-clusters",
+ from: "/pages-for-subheaders/access-clusters",
},
{
- to: '/how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides/advanced-configuration',
- from: '/pages-for-subheaders/advanced-configuration'
+ to: "/how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides/advanced-configuration",
+ from: "/pages-for-subheaders/advanced-configuration",
},
{
- to: '/how-to-guides/advanced-user-guides',
- from: '/pages-for-subheaders/advanced-user-guides'
+ to: "/how-to-guides/advanced-user-guides",
+ from: "/pages-for-subheaders/advanced-user-guides",
},
{
- to: '/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install',
- from: '/pages-for-subheaders/air-gapped-helm-cli-install'
+ to: "/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install",
+ from: "/pages-for-subheaders/air-gapped-helm-cli-install",
},
{
- to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config',
- from: '/pages-for-subheaders/authentication-config'
+ to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config",
+ from: "/pages-for-subheaders/authentication-config",
},
{
- to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration',
- from: '/pages-for-subheaders/authentication-permissions-and-global-configuration'
+ to: 
"/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration",
+ from: "/pages-for-subheaders/authentication-permissions-and-global-configuration",
},
{
- to: '/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace',
- from: '/pages-for-subheaders/aws-cloud-marketplace'
+ to: "/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace",
+ from: "/pages-for-subheaders/aws-cloud-marketplace",
},
{
- to: '/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery',
- from: '/pages-for-subheaders/backup-restore-and-disaster-recovery'
+ to: "/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery",
+ from: "/pages-for-subheaders/backup-restore-and-disaster-recovery",
},
{
- to: '/reference-guides/backup-restore-configuration',
- from: '/pages-for-subheaders/backup-restore-configuration'
+ to: "/reference-guides/backup-restore-configuration",
+ from: "/pages-for-subheaders/backup-restore-configuration",
},
{
- to: '/reference-guides/best-practices',
- from: '/pages-for-subheaders/best-practices'
+ to: "/reference-guides/best-practices",
+ from: "/pages-for-subheaders/best-practices",
},
{
- to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters',
- from: '/pages-for-subheaders/checklist-for-production-ready-clusters'
+ to: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters",
+ from: "/pages-for-subheaders/checklist-for-production-ready-clusters",
},
{
- to: '/how-to-guides/advanced-user-guides/cis-scan-guides',
- from: '/pages-for-subheaders/cis-scan-guides'
+ to: "/how-to-guides/advanced-user-guides/cis-scan-guides",
+ from: "/pages-for-subheaders/cis-scan-guides",
},
{
- to: '/integrations-in-rancher/cis-scans',
- from: '/pages-for-subheaders/cis-scans'
+ to: "/integrations-in-rancher/cis-scans",
+ from: "/pages-for-subheaders/cis-scans",
},
{
- to: '/reference-guides/cli-with-rancher',
- from: 
'/pages-for-subheaders/cli-with-rancher'
+ to: "/reference-guides/cli-with-rancher",
+ from: "/pages-for-subheaders/cli-with-rancher",
},
{
- to: '/integrations-in-rancher/cloud-marketplace',
- from: '/pages-for-subheaders/cloud-marketplace'
+ to: "/integrations-in-rancher/cloud-marketplace",
+ from: "/pages-for-subheaders/cloud-marketplace",
},
{
- to: '/reference-guides/cluster-configuration',
- from: '/pages-for-subheaders/cluster-configuration'
+ to: "/reference-guides/cluster-configuration",
+ from: "/pages-for-subheaders/cluster-configuration",
},
{
- to: '/integrations-in-rancher/istio/configuration-options',
- from: '/pages-for-subheaders/configuration-options'
+ to: "/integrations-in-rancher/istio/configuration-options",
+ from: "/pages-for-subheaders/configuration-options",
},
{
- to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-microsoft-ad-federation-service-saml',
- from: '/pages-for-subheaders/configure-microsoft-ad-federation-service-saml'
+ to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-microsoft-ad-federation-service-saml",
+ from: "/pages-for-subheaders/configure-microsoft-ad-federation-service-saml",
},
{
- to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-openldap',
- from: '/pages-for-subheaders/configure-openldap'
+ to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-openldap",
+ from: "/pages-for-subheaders/configure-openldap",
},
{
- to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-shibboleth-saml',
- from: '/pages-for-subheaders/configure-shibboleth-saml'
+ to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-shibboleth-saml",
+ from: "/pages-for-subheaders/configure-shibboleth-saml",
},
{
- to: 
'/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage',
- from: '/pages-for-subheaders/create-kubernetes-persistent-storage'
+ to: "/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage",
+ from: "/pages-for-subheaders/create-kubernetes-persistent-storage",
},
{
- to: '/integrations-in-rancher/logging/custom-resource-configuration',
- from: '/pages-for-subheaders/custom-resource-configuration'
+ to: "/integrations-in-rancher/logging/custom-resource-configuration",
+ from: "/pages-for-subheaders/custom-resource-configuration",
},
{
- to: '/getting-started/quick-start-guides/deploy-rancher-manager',
- from: '/pages-for-subheaders/deploy-rancher-manager'
+ to: "/getting-started/quick-start-guides/deploy-rancher-manager",
+ from: "/pages-for-subheaders/deploy-rancher-manager",
},
{
- to: '/getting-started/quick-start-guides/deploy-workloads',
- from: '/pages-for-subheaders/deploy-rancher-workloads'
+ to: "/getting-started/quick-start-guides/deploy-workloads",
+ from: "/pages-for-subheaders/deploy-rancher-workloads",
},
{
- to: '/reference-guides/cluster-configuration/downstream-cluster-configuration',
- from: '/pages-for-subheaders/downstream-cluster-configuration'
+ to: "/reference-guides/cluster-configuration/downstream-cluster-configuration",
+ from: "/pages-for-subheaders/downstream-cluster-configuration",
},
{
- to: '/how-to-guides/advanced-user-guides/enable-experimental-features',
- from: '/pages-for-subheaders/enable-experimental-features'
+ to: "/how-to-guides/advanced-user-guides/enable-experimental-features",
+ from: "/pages-for-subheaders/enable-experimental-features",
},
{
- to: '/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration',
- from: '/pages-for-subheaders/gke-cluster-configuration'
+ to: "/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration",
+ from: "/pages-for-subheaders/gke-cluster-configuration",
},
{
- to: '/how-to-guides/new-user-guides/helm-charts-in-rancher',
- from: '/pages-for-subheaders/helm-charts-in-rancher'
+ to: "/how-to-guides/new-user-guides/helm-charts-in-rancher",
+ from: "/pages-for-subheaders/helm-charts-in-rancher",
},
{
- to: '/how-to-guides/new-user-guides/kubernetes-resources-setup/horizontal-pod-autoscaler',
- from: '/pages-for-subheaders/horizontal-pod-autoscaler'
+ to: "/how-to-guides/new-user-guides/kubernetes-resources-setup/horizontal-pod-autoscaler",
+ from: "/pages-for-subheaders/horizontal-pod-autoscaler",
},
{
- to: '/how-to-guides/new-user-guides/infrastructure-setup',
- from: '/pages-for-subheaders/infrastructure-setup'
+ to: "/how-to-guides/new-user-guides/infrastructure-setup",
+ from: "/pages-for-subheaders/infrastructure-setup",
},
{
- to: '/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler',
- from: '/pages-for-subheaders/install-cluster-autoscaler'
+ to: "/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler",
+ from: "/pages-for-subheaders/install-cluster-autoscaler",
},
{
- to: '/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster',
- from: '/pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster'
+ to: "/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster",
+ from: "/pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster",
},
{
- to: '/getting-started/installation-and-upgrade',
- from: '/pages-for-subheaders/installation-and-upgrade'
+ to: "/getting-started/installation-and-upgrade",
+ from: "/pages-for-subheaders/installation-and-upgrade",
},
{
- to: '/getting-started/installation-and-upgrade/installation-references',
- from: '/pages-for-subheaders/installation-references'
+ to: "/getting-started/installation-and-upgrade/installation-references",
+ from: "/pages-for-subheaders/installation-references",
},
{
- to: '/getting-started/installation-and-upgrade/installation-requirements',
- from: 
'/pages-for-subheaders/installation-requirements'
+ to: "/getting-started/installation-and-upgrade/installation-requirements",
+ from: "/pages-for-subheaders/installation-requirements",
},
{
- to: '/how-to-guides/advanced-user-guides/istio-setup-guide',
- from: '/pages-for-subheaders/istio-setup-guide'
+ to: "/how-to-guides/advanced-user-guides/istio-setup-guide",
+ from: "/pages-for-subheaders/istio-setup-guide",
},
{
- to: '/integrations-in-rancher/istio',
- from: '/pages-for-subheaders/istio'
+ to: "/integrations-in-rancher/istio",
+ from: "/pages-for-subheaders/istio",
},
{
- to: '/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide',
- from: '/pages-for-subheaders/k3s-hardening-guide'
+ to: "/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide",
+ from: "/pages-for-subheaders/k3s-hardening-guide",
},
{
- to: '/how-to-guides/new-user-guides/kubernetes-cluster-setup',
- from: '/pages-for-subheaders/kubernetes-cluster-setup'
+ to: "/how-to-guides/new-user-guides/kubernetes-cluster-setup",
+ from: "/pages-for-subheaders/kubernetes-cluster-setup",
},
{
- to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup',
- from: '/pages-for-subheaders/kubernetes-clusters-in-rancher-setup'
+ to: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup",
+ from: "/pages-for-subheaders/kubernetes-clusters-in-rancher-setup",
},
{
- to: '/troubleshooting/kubernetes-components',
- from: '/pages-for-subheaders/kubernetes-components'
+ to: "/troubleshooting/kubernetes-components",
+ from: "/pages-for-subheaders/kubernetes-components",
},
{
- to: '/how-to-guides/new-user-guides/kubernetes-resources-setup',
- from: '/pages-for-subheaders/kubernetes-resources-setup'
+ to: "/how-to-guides/new-user-guides/kubernetes-resources-setup",
+ from: "/pages-for-subheaders/kubernetes-resources-setup",
},
{
- to: '/how-to-guides/new-user-guides/launch-kubernetes-with-rancher',
- from: 
'/pages-for-subheaders/launch-kubernetes-with-rancher'
+ to: "/how-to-guides/new-user-guides/launch-kubernetes-with-rancher",
+ from: "/pages-for-subheaders/launch-kubernetes-with-rancher",
},
{
- to: '/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller',
- from: '/pages-for-subheaders/load-balancer-and-ingress-controller'
+ to: "/how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller",
+ from: "/pages-for-subheaders/load-balancer-and-ingress-controller",
},
{
- to: '/integrations-in-rancher/logging',
- from: '/pages-for-subheaders/logging'
+ to: "/integrations-in-rancher/logging",
+ from: "/pages-for-subheaders/logging",
},
{
- to: '/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration',
- from: '/pages-for-subheaders/machine-configuration'
+ to: "/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration",
+ from: "/pages-for-subheaders/machine-configuration",
},
{
- to: '/how-to-guides/new-user-guides/manage-clusters',
- from: '/pages-for-subheaders/manage-clusters'
+ to: "/how-to-guides/new-user-guides/manage-clusters",
+ from: "/pages-for-subheaders/manage-clusters",
},
{
- to: '/how-to-guides/advanced-user-guides/manage-projects/manage-project-resource-quotas',
- from: '/pages-for-subheaders/manage-project-resource-quotas'
+ to: "/how-to-guides/advanced-user-guides/manage-projects/manage-project-resource-quotas",
+ from: "/pages-for-subheaders/manage-project-resource-quotas",
},
{
- to: '/how-to-guides/advanced-user-guides/manage-projects',
- from: '/pages-for-subheaders/manage-projects'
+ to: "/how-to-guides/advanced-user-guides/manage-projects",
+ from: "/pages-for-subheaders/manage-projects",
},
{
- to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac',
- from: '/pages-for-subheaders/manage-role-based-access-control-rbac'
+ to: 
"/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac",
+ from: "/pages-for-subheaders/manage-role-based-access-control-rbac",
},
{
- to: '/how-to-guides/advanced-user-guides/monitoring-alerting-guides',
- from: '/pages-for-subheaders/monitoring-alerting-guides'
+ to: "/how-to-guides/advanced-user-guides/monitoring-alerting-guides",
+ from: "/pages-for-subheaders/monitoring-alerting-guides",
},
{
- to: '/integrations-in-rancher/monitoring-and-alerting',
- from: '/pages-for-subheaders/monitoring-and-alerting'
+ to: "/integrations-in-rancher/monitoring-and-alerting",
+ from: "/pages-for-subheaders/monitoring-and-alerting",
},
{
- to: '/how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides',
- from: '/pages-for-subheaders/monitoring-v2-configuration-guides'
+ to: "/how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides",
+ from: "/pages-for-subheaders/monitoring-v2-configuration-guides",
},
{
- to: '/reference-guides/monitoring-v2-configuration',
- from: '/pages-for-subheaders/monitoring-v2-configuration'
+ to: "/reference-guides/monitoring-v2-configuration",
+ from: "/pages-for-subheaders/monitoring-v2-configuration",
},
{
- to: '/how-to-guides/new-user-guides',
- from: '/pages-for-subheaders/new-user-guides'
+ to: "/how-to-guides/new-user-guides",
+ from: "/pages-for-subheaders/new-user-guides",
},
{
- to: '/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration',
- from: '/pages-for-subheaders/node-template-configuration'
+ to: "/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration",
+ from: "/pages-for-subheaders/node-template-configuration",
},
{
- to: '/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/nutanix',
- from: '/pages-for-subheaders/nutanix'
+ to: 
"/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/nutanix",
+ from: "/pages-for-subheaders/nutanix",
},
{
- to: '/getting-started/installation-and-upgrade/other-installation-methods',
- from: '/pages-for-subheaders/other-installation-methods'
+ to: "/getting-started/installation-and-upgrade/other-installation-methods",
+ from: "/pages-for-subheaders/other-installation-methods",
},
{
- to: '/how-to-guides/advanced-user-guides/monitoring-alerting-guides/prometheus-federator-guides',
- from: '/pages-for-subheaders/prometheus-federator-guides'
+ to: "/how-to-guides/advanced-user-guides/monitoring-alerting-guides/prometheus-federator-guides",
+ from: "/pages-for-subheaders/prometheus-federator-guides",
},
{
- to: '/reference-guides/prometheus-federator',
- from: '/pages-for-subheaders/prometheus-federator'
+ to: "/reference-guides/prometheus-federator",
+ from: "/pages-for-subheaders/prometheus-federator",
},
{
- to: '/how-to-guides/new-user-guides/manage-clusters/provisioning-storage-examples',
- from: '/pages-for-subheaders/provisioning-storage-examples'
+ to: "/how-to-guides/new-user-guides/manage-clusters/provisioning-storage-examples",
+ from: "/pages-for-subheaders/provisioning-storage-examples",
},
{
- to: '/getting-started/quick-start-guides',
- from: '/pages-for-subheaders/quick-start-guides'
+ to: "/getting-started/quick-start-guides",
+ from: "/pages-for-subheaders/quick-start-guides",
},
{
- to: '/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy',
- from: '/pages-for-subheaders/rancher-behind-an-http-proxy'
+ to: "/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy",
+ from: "/pages-for-subheaders/rancher-behind-an-http-proxy",
},
{
- to: '/reference-guides/rancher-security/hardening-guides',
- from: '/pages-for-subheaders/rancher-hardening-guides'
+ to: "/reference-guides/rancher-security/hardening-guides",
+ from: 
"/pages-for-subheaders/rancher-hardening-guides",
},
{
- to: '/reference-guides/best-practices/rancher-managed-clusters',
- from: '/pages-for-subheaders/rancher-managed-clusters'
+ to: "/reference-guides/best-practices/rancher-managed-clusters",
+ from: "/pages-for-subheaders/rancher-managed-clusters",
},
{
- to: '/reference-guides/rancher-manager-architecture',
- from: '/pages-for-subheaders/rancher-manager-architecture'
+ to: "/reference-guides/rancher-manager-architecture",
+ from: "/pages-for-subheaders/rancher-manager-architecture",
},
{
- to: '/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker',
- from: '/pages-for-subheaders/rancher-on-a-single-node-with-docker'
+ to: "/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker",
+ from: "/pages-for-subheaders/rancher-on-a-single-node-with-docker",
},
{
- to: '/reference-guides/rancher-security',
- from: '/pages-for-subheaders/rancher-security'
+ to: "/reference-guides/rancher-security",
+ from: "/pages-for-subheaders/rancher-security",
},
{
- to: '/reference-guides/cluster-configuration/rancher-server-configuration',
- from: '/pages-for-subheaders/rancher-server-configuration'
+ to: "/reference-guides/cluster-configuration/rancher-server-configuration",
+ from: "/pages-for-subheaders/rancher-server-configuration",
},
{
- to: '/reference-guides/best-practices/rancher-server',
- from: '/pages-for-subheaders/rancher-server'
+ to: "/reference-guides/best-practices/rancher-server",
+ from: "/pages-for-subheaders/rancher-server",
},
{
- to: '/getting-started/installation-and-upgrade/resources',
- from: '/pages-for-subheaders/resources'
+ to: "/getting-started/installation-and-upgrade/resources",
+ from: "/pages-for-subheaders/resources",
},
{
- to: '/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide',
- from: '/pages-for-subheaders/rke1-hardening-guide'
+ to: 
"/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide",
+ from: "/pages-for-subheaders/rke1-hardening-guide",
},
{
- to: '/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide',
- from: '/pages-for-subheaders/rke2-hardening-guide'
+ to: "/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide",
+ from: "/pages-for-subheaders/rke2-hardening-guide",
},
{
- to: '/reference-guides/rancher-security/selinux-rpm',
- from: '/pages-for-subheaders/selinux-rpm'
+ to: "/reference-guides/rancher-security/selinux-rpm",
+ from: "/pages-for-subheaders/selinux-rpm",
},
{
- to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers',
- from: '/pages-for-subheaders/set-up-cloud-providers'
+ to: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers",
+ from: "/pages-for-subheaders/set-up-cloud-providers",
},
{
- to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers',
- from: '/pages-for-subheaders/set-up-clusters-from-hosted-kubernetes-providers'
+ to: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers",
+ from: "/pages-for-subheaders/set-up-clusters-from-hosted-kubernetes-providers",
},
{
- to: '/reference-guides/single-node-rancher-in-docker',
- from: '/pages-for-subheaders/single-node-rancher-in-docker'
+ to: "/reference-guides/single-node-rancher-in-docker",
+ from: "/pages-for-subheaders/single-node-rancher-in-docker",
},
{
- to: '/reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes',
- from: '/pages-for-subheaders/use-existing-nodes'
+ to: "/reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes",
+ from: "/pages-for-subheaders/use-existing-nodes",
},
{
- to: 
'/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider',
- from: '/pages-for-subheaders/use-new-nodes-in-an-infra-provider'
+ to: "/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider",
+ from: "/pages-for-subheaders/use-new-nodes-in-an-infra-provider",
},
{
- to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters',
- from: '/pages-for-subheaders/use-windows-clusters'
+ to: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters",
+ from: "/pages-for-subheaders/use-windows-clusters",
},
{
- to: '/reference-guides/user-settings',
- from: '/pages-for-subheaders/user-settings'
+ to: "/reference-guides/user-settings",
+ from: "/pages-for-subheaders/user-settings",
},
{
- to: '/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere',
- from: '/pages-for-subheaders/vsphere'
+ to: "/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere",
+ from: "/pages-for-subheaders/vsphere",
},
{
- to: '/how-to-guides/new-user-guides/kubernetes-resources-setup/workloads-and-pods',
- from: '/pages-for-subheaders/workloads-and-pods'
+ to: "/how-to-guides/new-user-guides/kubernetes-resources-setup/workloads-and-pods",
+ from: "/pages-for-subheaders/workloads-and-pods",
},
// Redirects for pages-for-subheaders removal [latest] (end)
- { // Redirects for dashboard v2.11 Preview (start)
- to: '/v2.11/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-google-oauth',
- from: '/v2.11/admin-settings/authentication/google',
+ {
+ // Redirects for dashboard v2.11 Preview (start)
+ to: "/v2.11/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-google-oauth",
+ from: "/v2.11/admin-settings/authentication/google",
},
{
- 
to: '/v2.11/how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides', - from: '/v2.11/monitoring-alerting/configuration', + to: "/v2.11/how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides", + from: "/v2.11/monitoring-alerting/configuration", }, { - to: '/v2.11/integrations-in-rancher/monitoring-and-alerting', - from: '/v2.11/monitoring-alerting', + to: "/v2.11/integrations-in-rancher/monitoring-and-alerting", + from: "/v2.11/monitoring-alerting", }, // Redirects for dashboard v2.11 Preview (end) - { // Redirects for dashboard#11114 (start) - to: '/v2.10/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-google-oauth', - from: '/v2.10/admin-settings/authentication/google', + { + // Redirects for dashboard#11114 (start) + to: "/v2.10/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-google-oauth", + from: "/v2.10/admin-settings/authentication/google", }, { - to: '/v2.10/how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides', - from: '/v2.10/monitoring-alerting/configuration', + to: "/v2.10/how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides", + from: "/v2.10/monitoring-alerting/configuration", }, { - to: '/v2.10/integrations-in-rancher/monitoring-and-alerting', - from: '/v2.10/monitoring-alerting', + to: "/v2.10/integrations-in-rancher/monitoring-and-alerting", + from: "/v2.10/monitoring-alerting", }, // Redirects for dashboard#11114 (end) - { // Redirects for dashboard#12040 (start) - to: '/v2.9/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-google-oauth', - from: '/v2.9/admin-settings/authentication/google', + { + // Redirects for dashboard#12040 (start) + to: "/v2.9/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-google-oauth", + from: 
"/v2.9/admin-settings/authentication/google", }, { - to: '/v2.9/how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides', - from: '/v2.9/monitoring-alerting/configuration', + to: "/v2.9/how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides", + from: "/v2.9/monitoring-alerting/configuration", }, { - to: '/v2.9/integrations-in-rancher/monitoring-and-alerting', - from: '/v2.9/monitoring-alerting', + to: "/v2.9/integrations-in-rancher/monitoring-and-alerting", + from: "/v2.9/monitoring-alerting", }, // Redirects for dashboard#12040 (end) - { // Redirects for dashboard#9970 - to: '/v2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/rke1-vs-rke2-differences', - from: '/v2.8/cluster-provisioning/rke-clusters/behavior-differences-between-rke1-and-rke2/' + { + // Redirects for dashboard#9970 + to: "/v2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/rke1-vs-rke2-differences", + from: "/v2.8/cluster-provisioning/rke-clusters/behavior-differences-between-rke1-and-rke2/", }, // Redirects for dashboard#9970 (end) - { // Redirects for restructure from PR #234 (start) - to: '/faq/general-faq', - from: '/faq' + { + // Redirects for restructure from PR #234 (start) + to: "/faq/general-faq", + from: "/faq", + }, + { + to: "/troubleshooting/general-troubleshooting", + from: "/troubleshooting", + }, + { + to: "/getting-started/overview", + from: "/getting-started/introduction/overview", }, { - to: '/troubleshooting/general-troubleshooting', - from: '/troubleshooting' + to: "/how-to-guides/advanced-user-guides/enable-experimental-features/rancher-on-arm64", + from: "/getting-started/installation-and-upgrade/advanced-options/enable-experimental-features/rancher-on-arm64", }, { - to: '/getting-started/overview', - from: '/getting-started/introduction/overview' + to: "/how-to-guides/advanced-user-guides/enable-experimental-features/unsupported-storage-drivers", + from: 
"/getting-started/installation-and-upgrade/advanced-options/enable-experimental-features/unsupported-storage-drivers", }, { - to: '/how-to-guides/advanced-user-guides/enable-experimental-features/rancher-on-arm64', - from: '/getting-started/installation-and-upgrade/advanced-options/enable-experimental-features/rancher-on-arm64' + to: "/how-to-guides/advanced-user-guides/enable-experimental-features/istio-traffic-management-features", + from: "/getting-started/installation-and-upgrade/advanced-options/enable-experimental-features/istio-traffic-management-features", }, { - to: '/how-to-guides/advanced-user-guides/enable-experimental-features/unsupported-storage-drivers', - from: '/getting-started/installation-and-upgrade/advanced-options/enable-experimental-features/unsupported-storage-drivers' + to: "/how-to-guides/advanced-user-guides/enable-experimental-features/continuous-delivery", + from: "/getting-started/installation-and-upgrade/advanced-options/enable-experimental-features/continuous-delivery", }, { - to: '/how-to-guides/advanced-user-guides/enable-experimental-features/istio-traffic-management-features', - from: '/getting-started/installation-and-upgrade/advanced-options/enable-experimental-features/istio-traffic-management-features' + to: "/getting-started/installation-and-upgrade/installation-references/helm-chart-options", + from: "/reference-guides/installation-references/helm-chart-options", }, { - to: '/how-to-guides/advanced-user-guides/enable-experimental-features/continuous-delivery', - from: '/getting-started/installation-and-upgrade/advanced-options/enable-experimental-features/continuous-delivery' + to: "/getting-started/installation-and-upgrade/installation-references/tls-settings", + from: "/reference-guides/installation-references/tls-settings", }, { - to: '/getting-started/installation-and-upgrade/installation-references/helm-chart-options', - from: '/reference-guides/installation-references/helm-chart-options' + to: 
"/getting-started/installation-and-upgrade/installation-references/feature-flags", + from: "/reference-guides/installation-references/feature-flags", }, { - to: '/getting-started/installation-and-upgrade/installation-references/tls-settings', - from: '/reference-guides/installation-references/tls-settings' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/manage-users-and-groups", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/manage-users-and-groups", }, { - to: '/getting-started/installation-and-upgrade/installation-references/feature-flags', - from: '/reference-guides/installation-references/feature-flags' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/create-local-users", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/create-local-users", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/manage-users-and-groups', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/manage-users-and-groups' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-google-oauth", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-google-oauth", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/create-local-users', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/create-local-users' + to: 
"/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-active-directory", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-active-directory", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-google-oauth', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-google-oauth' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-freeipa", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-freeipa", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-active-directory', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-active-directory' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-azure-ad", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-freeipa', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-freeipa' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-github", + from: 
"/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-github", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-azure-ad' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-keycloak-oidc", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-keycloak-oidc", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-github', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-github' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-keycloak-saml", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-keycloak-saml", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-keycloak-oidc', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-keycloak-oidc' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-pingidentity", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-pingidentity", }, { - to: 
'/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-keycloak-saml', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-keycloak-saml' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-okta-saml", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-okta-saml", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-pingidentity', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-pingidentity' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-microsoft-ad-federation-service-saml/configure-ms-adfs-for-rancher", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/configure-microsoft-ad-federation-service-saml/configure-ms-adfs-for-rancher", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-okta-saml', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/authentication-config/configure-okta-saml' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-microsoft-ad-federation-service-saml/configure-rancher-for-ms-adfs", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/configure-microsoft-ad-federation-service-saml/configure-rancher-for-ms-adfs", }, { - to: 
'/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-microsoft-ad-federation-service-saml/configure-ms-adfs-for-rancher', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/configure-microsoft-ad-federation-service-saml/configure-ms-adfs-for-rancher' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-shibboleth-saml/about-group-permissions", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/configure-shibboleth-saml/about-group-permissions", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-microsoft-ad-federation-service-saml/configure-rancher-for-ms-adfs', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/configure-microsoft-ad-federation-service-saml/configure-rancher-for-ms-adfs' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-openldap/openldap-config-reference", + from: "/reference-guides/configure-openldap/openldap-config-reference", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-shibboleth-saml/about-group-permissions', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-authentication/configure-shibboleth-saml/about-group-permissions' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions", }, { - to: 
'/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-openldap/openldap-config-reference', - from: '/reference-guides/configure-openldap/openldap-config-reference' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/custom-roles", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/custom-roles", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/locked-roles", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/locked-roles", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/custom-roles', - from: 
'/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/custom-roles' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-cluster-drivers", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-cluster-drivers", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/locked-roles', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/locked-roles' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-node-drivers", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-node-drivers", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-cluster-drivers', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-cluster-drivers' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/creator-permissions", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/creator-permissions", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-node-drivers', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-node-drivers' + to: 
"/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/access-or-share-templates", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/access-or-share-templates", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/creator-permissions', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/creator-permissions' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/manage-rke1-templates", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/manage-rke1-templates", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/access-or-share-templates', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/access-or-share-templates' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/enforce-templates", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/enforce-templates", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/manage-rke1-templates', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/manage-rke1-templates' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/override-template-settings", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/override-template-settings", }, { - to: 
'/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/enforce-templates', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/enforce-templates' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/apply-templates", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/apply-templates", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/override-template-settings', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/override-template-settings' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/infrastructure", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/infrastructure", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/apply-templates', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/apply-templates' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/example-use-cases", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/example-use-cases", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/infrastructure', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/infrastructure' + to: 
"/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/create-pod-security-policies", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/create-pod-security-policies", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/example-use-cases', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/example-use-cases' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry' + to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/custom-branding", + from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/custom-branding", }, { - to: '/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/custom-branding', - from: '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/custom-branding' + to: "/how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig", + from: "/how-to-guides/advanced-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig", }, { - to: '/how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig', - from: '/how-to-guides/advanced-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig' + to: "/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint", + 
from: "/how-to-guides/advanced-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint", }, { - to: '/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint', - from: '/how-to-guides/advanced-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint' + to: "/how-to-guides/new-user-guides/manage-clusters/access-clusters/add-users-to-clusters", + from: "/how-to-guides/advanced-user-guides/manage-clusters/access-clusters/add-users-to-clusters", }, { - to: '/how-to-guides/new-user-guides/manage-clusters/access-clusters/add-users-to-clusters', - from: '/how-to-guides/advanced-user-guides/manage-clusters/access-clusters/add-users-to-clusters' + to: "/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups", + from: "/how-to-guides/advanced-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups", }, { - to: '/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups', - from: '/how-to-guides/advanced-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups' + to: "/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/about-persistent-storage", + from: "/how-to-guides/advanced-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/about-persistent-storage", }, { - to: '/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/about-persistent-storage', - from: '/how-to-guides/advanced-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/about-persistent-storage' + to: "/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage", + from: 
"/how-to-guides/advanced-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage", }, { - to: '/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage', - from: '/how-to-guides/advanced-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/set-up-existing-storage' + to: "/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage", + from: "/how-to-guides/advanced-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage", }, { - to: '/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage', - from: '/how-to-guides/advanced-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage' + to: "/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/use-external-ceph-driver", + from: "/how-to-guides/advanced-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/use-external-ceph-driver", }, { - to: '/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/use-external-ceph-driver', - from: '/how-to-guides/advanced-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/use-external-ceph-driver' + to: "/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/about-glusterfs-volumes", + from: "/how-to-guides/advanced-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/about-glusterfs-volumes", }, { - to: 
'/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/about-glusterfs-volumes',
-            from: '/how-to-guides/advanced-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/about-glusterfs-volumes'
+            to: "/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/install-iscsi-volumes",
+            from: "/how-to-guides/advanced-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/install-iscsi-volumes",
           },
           {
-            to: '/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/install-iscsi-volumes',
-            from: '/how-to-guides/advanced-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/install-iscsi-volumes'
+            to: "/how-to-guides/new-user-guides/manage-clusters/provisioning-storage-examples/persistent-storage-in-amazon-ebs",
+            from: "/how-to-guides/advanced-user-guides/manage-clusters/create-kubernetes-persistent-storage/provisioning-storage-examples/persistent-storage-in-amazon-ebs",
           },
           {
-            to: '/how-to-guides/new-user-guides/manage-clusters/provisioning-storage-examples/persistent-storage-in-amazon-ebs',
-            from: '/how-to-guides/advanced-user-guides/manage-clusters/create-kubernetes-persistent-storage/provisioning-storage-examples/persistent-storage-in-amazon-ebs'
+            to: "/how-to-guides/new-user-guides/manage-clusters/provisioning-storage-examples/nfs-storage",
+            from: "/how-to-guides/advanced-user-guides/manage-clusters/create-kubernetes-persistent-storage/provisioning-storage-examples/nfs-storage",
           },
           {
-            to: '/how-to-guides/new-user-guides/manage-clusters/provisioning-storage-examples/nfs-storage',
-            from: '/how-to-guides/advanced-user-guides/manage-clusters/create-kubernetes-persistent-storage/provisioning-storage-examples/nfs-storage'
+            to: "/how-to-guides/new-user-guides/manage-clusters/provisioning-storage-examples/vsphere-storage",
+            from: "/how-to-guides/advanced-user-guides/manage-clusters/create-kubernetes-persistent-storage/provisioning-storage-examples/vsphere-storage",
           },
           {
-            to: '/how-to-guides/new-user-guides/manage-clusters/provisioning-storage-examples/vsphere-storage',
-            from: '/how-to-guides/advanced-user-guides/manage-clusters/create-kubernetes-persistent-storage/provisioning-storage-examples/vsphere-storage'
+            to: "/how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces",
+            from: "/how-to-guides/advanced-user-guides/manage-clusters/projects-and-namespaces",
           },
           {
-            to: '/how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces',
-            from: '/how-to-guides/advanced-user-guides/manage-clusters/projects-and-namespaces'
+            to: "/how-to-guides/new-user-guides/manage-clusters/rotate-certificates",
+            from: "/how-to-guides/advanced-user-guides/manage-clusters/rotate-certificates",
           },
           {
-            to: '/how-to-guides/new-user-guides/manage-clusters/rotate-certificates',
-            from: '/how-to-guides/advanced-user-guides/manage-clusters/rotate-certificates'
+            to: "/how-to-guides/new-user-guides/manage-clusters/rotate-encryption-key",
+            from: "/how-to-guides/advanced-user-guides/manage-clusters/rotate-encryption-key",
           },
           {
-            to: '/how-to-guides/new-user-guides/manage-clusters/rotate-encryption-key',
-            from: '/how-to-guides/advanced-user-guides/manage-clusters/rotate-encryption-key'
+            to: "/how-to-guides/new-user-guides/manage-clusters/manage-cluster-templates",
+            from: [
+              "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-cluster-templates",
+              "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/manage-cluster-templates",
+            ],
           },
           {
-            to: '/how-to-guides/new-user-guides/manage-clusters/manage-cluster-templates',
-            from: ['/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-cluster-templates', '/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/manage-cluster-templates']
+            to: "/how-to-guides/new-user-guides/manage-clusters/nodes-and-node-pools",
+            from: "/how-to-guides/advanced-user-guides/manage-clusters/nodes-and-node-pools",
           },
           {
-            to: '/how-to-guides/new-user-guides/manage-clusters/nodes-and-node-pools',
-            from: '/how-to-guides/advanced-user-guides/manage-clusters/nodes-and-node-pools'
+            to: "/how-to-guides/new-user-guides/manage-clusters/clean-cluster-nodes",
+            from: "/how-to-guides/advanced-user-guides/manage-clusters/clean-cluster-nodes",
           },
           {
-            to: '/how-to-guides/new-user-guides/manage-clusters/clean-cluster-nodes',
-            from: '/how-to-guides/advanced-user-guides/manage-clusters/clean-cluster-nodes'
+            to: "/how-to-guides/new-user-guides/manage-clusters/add-a-pod-security-policy",
+            from: "/how-to-guides/advanced-user-guides/manage-clusters/add-a-pod-security-policy",
           },
           {
-            to: '/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster'
+            to: "/how-to-guides/new-user-guides/manage-clusters/assign-pod-security-policies",
+            from: "/how-to-guides/advanced-user-guides/manage-clusters/assign-pod-security-policies",
           },
           {
-            to: '/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-digitalocean-cluster',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-digitalocean-cluster'
+            to: "/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster",
           },
           {
-            to: '/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-azure-cluster',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-azure-cluster'
+            to: "/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-digitalocean-cluster",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-digitalocean-cluster",
           },
           {
-            to: '/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/provision-kubernetes-clusters-in-vsphere',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/provision-kubernetes-clusters-in-vsphere'
+            to: "/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-azure-cluster",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-azure-cluster",
           },
           {
-            to: '/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-credentials',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-credentials'
+            to: "/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/provision-kubernetes-clusters-in-vsphere",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/provision-kubernetes-clusters-in-vsphere",
           },
           {
-            to: '/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template'
+            to: "/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-credentials",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-credentials",
           },
           {
-            to: '/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/nutanix/provision-kubernetes-clusters-in-aos',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/nutanix/provision-kubernetes-clusters-in-aos'
+            to: "/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template",
           },
           {
-            to: '/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/rke1-vs-rke2-differences',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/rke1-vs-rke2-differences'
+            to: "/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/nutanix/provision-kubernetes-clusters-in-aos",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/nutanix/provision-kubernetes-clusters-in-aos",
           },
           {
-            to: '/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/about-rancher-agents'
+            to: "/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/rke1-vs-rke2-differences",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/rke1-vs-rke2-differences",
           },
           {
-            to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/azure-storageclass-configuration',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-windows-clusters/azure-storageclass-configuration'
+            to: "/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/about-rancher-agents",
           },
           {
-            to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/windows-linux-cluster-feature-parity',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-windows-clusters/windows-linux-cluster-feature-parity'
+            to: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/azure-storageclass-configuration",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-windows-clusters/azure-storageclass-configuration",
           },
           {
-            to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/network-requirements-for-host-gateway',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-windows-clusters/network-requirements-for-host-gateway'
+            to: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/windows-linux-cluster-feature-parity",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-windows-clusters/windows-linux-cluster-feature-parity",
           },
           {
-            to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/workload-migration-guidance',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-windows-clusters/workload-migration-guidance'
+            to: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/network-requirements-for-host-gateway",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-windows-clusters/network-requirements-for-host-gateway",
           },
           {
-            to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/set-up-cloud-providers/other-cloud-providers/amazon'
+            to: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/workload-migration-guidance",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-windows-clusters/workload-migration-guidance",
           },
           {
-            to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/migrate-to-an-out-of-tree-cloud-provider/migrate-to-out-of-tree-amazon',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-amazon'
+            to: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/set-up-cloud-providers/other-cloud-providers/amazon",
           },
           {
-            to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/azure',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/set-up-cloud-providers/other-cloud-providers/azure'
+            to: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/migrate-to-an-out-of-tree-cloud-provider/migrate-to-out-of-tree-amazon",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-amazon",
           },
           {
-            to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/google-compute-engine',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/set-up-cloud-providers/other-cloud-providers/google-compute-engine'
+            to: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/azure",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/set-up-cloud-providers/other-cloud-providers/azure",
           },
           {
-            to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/configure-in-tree-vsphere',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/set-up-cloud-providers/vsphere/configure-in-tree-vsphere'
+            to: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/google-compute-engine",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/set-up-cloud-providers/other-cloud-providers/google-compute-engine",
           },
           {
-            to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/configure-out-of-tree-vsphere',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/set-up-cloud-providers/vsphere/configure-out-of-tree-vsphere'
+            to: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/configure-in-tree-vsphere",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/set-up-cloud-providers/vsphere/configure-in-tree-vsphere",
           },
           {
-            to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/migrate-to-an-out-of-tree-cloud-provider/migrate-to-out-of-tree-vsphere',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/set-up-cloud-providers/vsphere/migrate-from-in-tree-to-out-of-tree'
+            to: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/configure-out-of-tree-vsphere",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/set-up-cloud-providers/vsphere/configure-out-of-tree-vsphere",
           },
           {
-            to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/migrate-to-an-out-of-tree-cloud-provider/migrate-to-out-of-tree-vsphere',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-vsphere',
+            to: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/migrate-to-an-out-of-tree-cloud-provider/migrate-to-out-of-tree-vsphere",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/set-up-cloud-providers/vsphere/migrate-from-in-tree-to-out-of-tree",
           },
-          { to: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/migrate-to-an-out-of-tree-cloud-provider/migrate-to-out-of-tree-vsphere',
-            from: '/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-from-in-tree-to-out-of-tree'
+          {
+            to: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/migrate-to-an-out-of-tree-cloud-provider/migrate-to-out-of-tree-vsphere",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-to-out-of-tree-vsphere",
+          },
+          {
+            to: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/migrate-to-an-out-of-tree-cloud-provider/migrate-to-out-of-tree-vsphere",
+            from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/migrate-from-in-tree-to-out-of-tree",
+          },
+          {
+            to: "/how-to-guides/new-user-guides/add-users-to-projects",
+            from: "/how-to-guides/advanced-user-guides/manage-projects/add-users-to-projects",
           },
           {
-            to: '/how-to-guides/new-user-guides/add-users-to-projects',
-            from: '/how-to-guides/advanced-user-guides/manage-projects/add-users-to-projects'
+            to: "/how-to-guides/new-user-guides/manage-namespaces",
+            from: "/how-to-guides/advanced-user-guides/manage-projects/manage-namespaces",
           },
           {
-            to: '/how-to-guides/new-user-guides/manage-namespaces',
-            from: '/how-to-guides/advanced-user-guides/manage-projects/manage-namespaces'
+            to: "/how-to-guides/advanced-user-guides/open-ports-with-firewalld",
+            from: "/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/open-ports-with-firewalld",
           },
           {
-            to: '/how-to-guides/advanced-user-guides/open-ports-with-firewalld',
-            from: '/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/open-ports-with-firewalld'
+            to: "/how-to-guides/advanced-user-guides/tune-etcd-for-large-installs",
+            from: "/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/tune-etcd-for-large-installs",
           },
           {
-            to: '/how-to-guides/advanced-user-guides/tune-etcd-for-large-installs',
-            from: '/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/tune-etcd-for-large-installs'
+            to: "/how-to-guides/advanced-user-guides/enable-api-audit-log",
+            from: "/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/enable-api-audit-log",
           },
           {
-            to: '/how-to-guides/advanced-user-guides/enable-api-audit-log',
-            from: '/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/enable-api-audit-log'
+            to: "/how-to-guides/advanced-user-guides/configure-layer-7-nginx-load-balancer",
+            from: "/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/configure-layer-7-nginx-load-balancer",
           },
           {
-            to: '/how-to-guides/advanced-user-guides/configure-layer-7-nginx-load-balancer',
-            from: '/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/configure-layer-7-nginx-load-balancer'
+            to: "/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements",
+            from: "/explanations/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements",
           },
           {
-            to: '/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements',
-            from: '/explanations/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements'
+            to: "/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter",
+            from: "/explanations/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter",
           },
           {
-            to: '/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter',
-            from: '/explanations/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter'
+            to: "/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter",
+            from: "/explanations/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter",
           },
           {
-            to: '/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter',
-            from: '/explanations/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter'
+            to: "/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues",
+            from: "/explanations/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues",
           },
           {
-            to: '/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues',
-            from: '/explanations/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues'
+            to: "/integrations-in-rancher/cloud-marketplace/supportconfig",
+            from: "/explanations/integrations-in-rancher/cloud-marketplace/supportconfig",
           },
           {
-            to: '/integrations-in-rancher/cloud-marketplace/supportconfig',
-            from: '/explanations/integrations-in-rancher/cloud-marketplace/supportconfig'
+            to: "/integrations-in-rancher/cis-scans/configuration-reference",
+            from: "/explanations/integrations-in-rancher/cis-scans/configuration-reference",
           },
           {
-            to: '/integrations-in-rancher/cis-scans/configuration-reference',
-            from: '/explanations/integrations-in-rancher/cis-scans/configuration-reference'
+            to: "/integrations-in-rancher/cis-scans/rbac-for-cis-scans",
+            from: "/explanations/integrations-in-rancher/cis-scans/rbac-for-cis-scans",
           },
           {
-            to: '/integrations-in-rancher/cis-scans/rbac-for-cis-scans',
-            from: '/explanations/integrations-in-rancher/cis-scans/rbac-for-cis-scans'
+            to: "/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests",
+            from: "/explanations/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests",
           },
           {
-            to: '/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests',
-            from: '/explanations/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests'
+            to: "/integrations-in-rancher/cis-scans/custom-benchmark",
+            from: "/explanations/integrations-in-rancher/cis-scans/custom-benchmark",
           },
           {
-            to: '/integrations-in-rancher/cis-scans/custom-benchmark',
-            from: '/explanations/integrations-in-rancher/cis-scans/custom-benchmark'
+            to: "/integrations-in-rancher/fleet/architecture",
+            from: "/explanations/integrations-in-rancher/fleet-gitops-at-scale/architecture",
           },
           {
-            to: '/integrations-in-rancher/fleet/architecture',
-            from: '/explanations/integrations-in-rancher/fleet-gitops-at-scale/architecture'
+            to: "/integrations-in-rancher/fleet/windows-support",
+            from: "/explanations/integrations-in-rancher/fleet-gitops-at-scale/windows-support",
           },
           {
-            to: '/integrations-in-rancher/fleet/windows-support',
-            from: '/explanations/integrations-in-rancher/fleet-gitops-at-scale/windows-support'
+            to: "/integrations-in-rancher/fleet/use-fleet-behind-a-proxy",
+            from: "/explanations/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy",
           },
           {
-            to: '/integrations-in-rancher/fleet/use-fleet-behind-a-proxy',
-            from: '/explanations/integrations-in-rancher/fleet-gitops-at-scale/use-fleet-behind-a-proxy'
+            to: "/integrations-in-rancher/harvester",
+            from: "/explanations/integrations-in-rancher/harvester",
           },
           {
-            to: '/integrations-in-rancher/harvester',
-            from: '/explanations/integrations-in-rancher/harvester'
+            to: "/integrations-in-rancher/istio/cpu-and-memory-allocations",
+            from: "/explanations/integrations-in-rancher/istio/cpu-and-memory-allocations",
           },
           {
-            to: '/integrations-in-rancher/istio/cpu-and-memory-allocations',
-            from: '/explanations/integrations-in-rancher/istio/cpu-and-memory-allocations'
+            to: "/integrations-in-rancher/istio/rbac-for-istio",
+            from: "/explanations/integrations-in-rancher/istio/rbac-for-istio",
           },
           {
-            to: '/integrations-in-rancher/istio/rbac-for-istio',
-            from: '/explanations/integrations-in-rancher/istio/rbac-for-istio'
+            to: "/integrations-in-rancher/istio/disable-istio",
+            from: "/explanations/integrations-in-rancher/istio/disable-istio",
           },
           {
-            to: '/integrations-in-rancher/istio/disable-istio',
-            from: '/explanations/integrations-in-rancher/istio/disable-istio'
+            to: "/integrations-in-rancher/istio/configuration-options/pod-security-policies",
+            from: "/explanations/integrations-in-rancher/istio/configuration-options/pod-security-policies",
           },
           {
-            to: '/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations',
-            from: '/explanations/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations'
+            to: "/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations",
+            from: "/explanations/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations",
           },
           {
-            to: '/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster',
-            from: '/explanations/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster'
+            to: "/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster",
+            from: "/explanations/integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster",
           },
           {
-            to: '/integrations-in-rancher/istio/configuration-options/project-network-isolation',
-            from: '/explanations/integrations-in-rancher/istio/configuration-options/project-network-isolation'
+            to: "/integrations-in-rancher/istio/configuration-options/project-network-isolation",
+            from: "/explanations/integrations-in-rancher/istio/configuration-options/project-network-isolation",
           },
           {
-            to: '/integrations-in-rancher/longhorn',
-            from: '/explanations/integrations-in-rancher/longhorn'
+            to: "/integrations-in-rancher/longhorn",
+            from: "/explanations/integrations-in-rancher/longhorn",
           },
           {
-            to: '/integrations-in-rancher/logging/logging-architecture',
-            from: '/explanations/integrations-in-rancher/logging/logging-architecture'
+            to: "/integrations-in-rancher/logging/logging-architecture",
+            from: "/explanations/integrations-in-rancher/logging/logging-architecture",
           },
           {
-            to: '/integrations-in-rancher/logging/rbac-for-logging',
-            from: '/explanations/integrations-in-rancher/logging/rbac-for-logging'
+            to: "/integrations-in-rancher/logging/rbac-for-logging",
+            from: "/explanations/integrations-in-rancher/logging/rbac-for-logging",
           },
           {
-            to: '/integrations-in-rancher/logging/logging-helm-chart-options',
-            from: '/explanations/integrations-in-rancher/logging/logging-helm-chart-options'
+            to: "/integrations-in-rancher/logging/logging-helm-chart-options",
+            from: "/explanations/integrations-in-rancher/logging/logging-helm-chart-options",
           },
           {
-            to: '/integrations-in-rancher/logging/taints-and-tolerations',
-            from: '/explanations/integrations-in-rancher/logging/taints-and-tolerations'
+            to: "/integrations-in-rancher/logging/taints-and-tolerations",
+            from: "/explanations/integrations-in-rancher/logging/taints-and-tolerations",
           },
           {
-            to: '/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows',
-            from: '/explanations/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows'
+            to: "/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows",
+            from: "/explanations/integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows",
           },
           {
-            to: '/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs',
-            from: '/explanations/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs'
+            to: "/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs",
+            from: "/explanations/integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs",
           },
           {
-            to: '/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works',
-            from: '/explanations/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works'
+            to: "/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works",
+            from: "/explanations/integrations-in-rancher/monitoring-and-alerting/how-monitoring-works",
           },
           {
-            to: '/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring',
-            from: '/explanations/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring'
+            to: "/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring",
+            from: "/explanations/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring",
           },
           {
-            to: '/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards',
-            from: '/explanations/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards'
+            to: "/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards",
+            from: "/explanations/integrations-in-rancher/monitoring-and-alerting/built-in-dashboards",
           },
           {
-            to: '/integrations-in-rancher/monitoring-and-alerting/windows-support',
-            from: '/explanations/integrations-in-rancher/monitoring-and-alerting/windows-support'
+            to: "/integrations-in-rancher/monitoring-and-alerting/windows-support",
+            from: "/explanations/integrations-in-rancher/monitoring-and-alerting/windows-support",
           },
           {
-            to: '/integrations-in-rancher/monitoring-and-alerting/promql-expressions',
-            from: '/explanations/integrations-in-rancher/monitoring-and-alerting/promql-expressions'
+            to: "/integrations-in-rancher/monitoring-and-alerting/promql-expressions",
+            from: "/explanations/integrations-in-rancher/monitoring-and-alerting/promql-expressions",
           },
           {
-            to: '/integrations-in-rancher/neuvector',
-            from: '/explanations/integrations-in-rancher/neuvector'
+            to: "/integrations-in-rancher/neuvector",
+            from: "/explanations/integrations-in-rancher/neuvector",
           },
           // Redirects for restructure from PR #234 (end)
           {
-            to: '/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27',
-            from: '/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.25'
+            to: "/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.24-k8s-v1.24",
+            from: "/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.24",
           },
           {
-            to: '/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale',
-            from: '/reference-guides/best-practices/rancher-server/tips-for-scaling-rancher'
+            to: "/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27",
+            from: "/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.25",
+          },
+          {
+            to: "/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.24-k8s-v1.24",
+            from: "/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.24",
+          },
+          {
+            to: "/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27",
+            from: "/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.25",
+          },
+          {
+            to: "/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.24-k8s-v1.24",
+            from: "/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.24",
+          },
+          {
+            to: "/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27",
+            from: "/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.25",
+          },
+          {
+            to: "/reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale",
+            from: "/reference-guides/best-practices/rancher-server/tips-for-scaling-rancher",
           },
           // Redirects for restructure from PR #1147 (start)
           {
-            to: '/v2.8/api/v3-rancher-api-guide',
-            from: ['/v2.8/reference-guides/about-the-api', '/v2.8/pages-for-subheaders/about-the-api']
+            to: "/v2.8/api/v3-rancher-api-guide",
+            from: [
+              "/v2.8/reference-guides/about-the-api",
+              "/v2.8/pages-for-subheaders/about-the-api",
+            ],
           },
           {
-            to: '/v2.8/api/api-tokens',
-            from: '/v2.8/reference-guides/about-the-api/api-tokens'
+            to: "/v2.8/api/api-tokens",
+            from: "/v2.8/reference-guides/about-the-api/api-tokens",
           },
           {
-            to: '/api/v3-rancher-api-guide',
-            from: ['/reference-guides/about-the-api', '/pages-for-subheaders/about-the-api']
+            to: "/api/v3-rancher-api-guide",
+            from: [
+              "/reference-guides/about-the-api",
+              "/pages-for-subheaders/about-the-api",
+            ],
           },
           {
-            to: '/api/api-tokens',
-            from: '/reference-guides/about-the-api/api-tokens'
-          }
+            to: "/api/api-tokens",
+            from: "/reference-guides/about-the-api/api-tokens",
+          },
           // Redirects for restructure from PR #1147 (end)
+          {
+            from: "/how-to-guides/advanced-user-guides/cis-scan-guides/install-rancher-cis-benchmark",
+            to: "/how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance",
+          },
+          {
+            from: "/how-to-guides/advanced-user-guides/cis-scan-guides/uninstall-rancher-cis-benchmark",
+            to: "/how-to-guides/advanced-user-guides/compliance-scan-guides/uninstall-rancher-compliance",
+          },
+          {
+            from: "/how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark",
+            to: "/how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance",
+          },
+          {
+            from: "/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan",
+            to: "/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan",
+          },
+          {
+            from: "/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan-periodically-on-a-schedule",
+            to: "/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan-periodically-on-a-schedule",
+          },
+          {
+            from: "/how-to-guides/advanced-user-guides/cis-scan-guides/skip-tests",
+            to: "/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests",
+          },
+          {
+            from: "/how-to-guides/advanced-user-guides/cis-scan-guides/view-reports",
+            to: "/how-to-guides/advanced-user-guides/compliance-scan-guides/view-reports",
+          },
+          {
+            from: "/how-to-guides/advanced-user-guides/cis-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule",
+            to: "/how-to-guides/advanced-user-guides/compliance-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule",
+          },
+          {
+            from: "/how-to-guides/advanced-user-guides/cis-scan-guides/create-a-custom-benchmark-version-to-run",
+            to: "/how-to-guides/advanced-user-guides/compliance-scan-guides/create-a-custom-compliance-version-to-run",
+          },
+          {
+            from: "/how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides",
+            to: "/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides",
+          },
         ],
       },
     ],
   ],
   scripts: [
     {
-      src: 'https://cdn.cookielaw.org/scripttemplates/otSDKStub.js',
-      type:'text/javascript',
-      charset: 'UTF-8',
-      'data-domain-script': '0f98beb0-fc4c-417d-a42e-564e2cae42d2',
-      async: true
+      src: "https://cdn.cookielaw.org/scripttemplates/otSDKStub.js",
+      type: "text/javascript",
+      charset: "UTF-8",
+      "data-domain-script": "0f98beb0-fc4c-417d-a42e-564e2cae42d2",
+      async: true,
     },
     {
-      src: '/scripts/optanonwrapper.js',
-      type:'text/javascript',
-      async: true
+      src: "/scripts/optanonwrapper.js",
+      type: "text/javascript",
+      async: true,
     },
   ],
 };
diff --git a/sidebars.js b/sidebars.js
index e8104011c45..a62839cdc35 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -18,26 +18,25 @@ const sidebars = {
   // But you can create a sidebar manually
   tutorialSidebar: [
-
-    'rancher-manager',
+    "rancher-manager",
     {
-      type: 'category',
-      label: 'Getting Started',
+      type: "category",
+      label: "Getting Started",
       items: [
         "getting-started/overview",
         {
-          type: 'category',
-          label: 'Quick Start Guides',
+          type: "category",
+          label: "Quick Start Guides",
           link: {
-            type: 'doc',
+            type: "doc",
             id: "getting-started/quick-start-guides/quick-start-guides",
           },
           items: [
             {
-              type: 'category',
-              label: 'Deploying Rancher Server',
+              type: "category",
+              label: "Deploying Rancher Server",
               link: {
-                type: 'doc',
+                type: "doc",
                 id: "getting-started/quick-start-guides/deploy-rancher-manager/deploy-rancher-manager",
               },
               items: [
@@ -52,63 +51,62 @@ const sidebars = {
                 "getting-started/quick-start-guides/deploy-rancher-manager/equinix-metal",
                 "getting-started/quick-start-guides/deploy-rancher-manager/outscale-qs",
                 "getting-started/quick-start-guides/deploy-rancher-manager/helm-cli",
-
-              ]
+              ],
             },
             "getting-started/quick-start-guides/deploy-rancher-manager/prime",
             {
-              type: 'category',
-              label: 'Deploying Workloads',
+              type: "category",
+              label: "Deploying Workloads",
               link: {
-                type: 'doc',
+                type: "doc",
                 id: "getting-started/quick-start-guides/deploy-workloads/deploy-workloads",
               },
               items: [
                 "getting-started/quick-start-guides/deploy-workloads/workload-ingress",
                 "getting-started/quick-start-guides/deploy-workloads/nodeports",
-              ]
-            }
-          ]
+              ],
+            },
+          ],
         },
         {
-          type: 'category',
-          label: 'Installation and Upgrade',
+          type: "category",
+          label: "Installation and Upgrade",
           link: {
-            type: 'doc',
+            type: "doc",
            id: "getting-started/installation-and-upgrade/installation-and-upgrade",
           },
           items: [
             {
-              type: 'category',
-              label: 'Installation Requirements',
+              type: "category",
+              label: "Installation Requirements",
               link: {
-                type: 'doc',
+                type: "doc",
                 id: "getting-started/installation-and-upgrade/installation-requirements/installation-requirements",
               },
               items: [
                 "getting-started/installation-and-upgrade/installation-requirements/install-docker",
                 "getting-started/installation-and-upgrade/installation-requirements/dockershim",
                 "getting-started/installation-and-upgrade/installation-requirements/port-requirements",
-              ]
+              ],
             },
             {
-              type: 'category',
-              label: 'Installation References',
+              type: "category",
+              label: "Installation References",
               link: {
-                type: 'doc',
+                type: "doc",
                 id: "getting-started/installation-and-upgrade/installation-references/installation-references",
               },
               items: [
                 "getting-started/installation-and-upgrade/installation-references/helm-chart-options",
                 "getting-started/installation-and-upgrade/installation-references/tls-settings",
-                "getting-started/installation-and-upgrade/installation-references/feature-flags"
-              ]
+                "getting-started/installation-and-upgrade/installation-references/feature-flags",
+              ],
             },
             {
-              type: 'category',
-              label: 'Install/Upgrade on a Kubernetes Cluster',
+              type: "category",
+              label: "Install/Upgrade on a Kubernetes Cluster",
               link: {
-                type: 'doc',
+                type: "doc",
                 id: "getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster",
               },
               items: [
@@ -119,21 +117,21 @@ const sidebars = {
                 "getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rancher-on-aks",
                 "getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rancher-on-gke",
                 "getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting",
-              ]
+              ],
             },
             {
-              type: 'category',
-              label: 'Other Installation Methods',
+              type: "category",
+              label: "Other Installation Methods",
               link: {
-                type: 'doc',
+                type: "doc",
                 id: "getting-started/installation-and-upgrade/other-installation-methods/other-installation-methods",
               },
               items: [
                 {
-                  type: 'category',
-                  label: 'Air-Gapped Helm CLI Install',
+                  type: "category",
+                  label: "Air-Gapped Helm CLI Install",
                   link: {
-                    type: 'doc',
+                    type: "doc",
                     id: "getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install",
                   },
                   items: [
@@ -142,41 +140,41 @@ const sidebars = {
                     "getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/install-kubernetes",
                     "getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/install-rancher-ha",
"getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/docker-install-commands", - ] + ], }, { - type: 'category', - label: 'Rancher on a Single Node with Docker', + type: "category", + label: "Rancher on a Single Node with Docker", link: { - type: 'doc', + type: "doc", id: "getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/rancher-on-a-single-node-with-docker", }, items: [ "getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/upgrade-docker-installed-rancher", "getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/roll-back-docker-installed-rancher", "getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/certificate-troubleshooting", - ] + ], }, { - type: 'category', - label: 'Rancher Behind an HTTP Proxy', + type: "category", + label: "Rancher Behind an HTTP Proxy", link: { - type: 'doc', + type: "doc", id: "getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/rancher-behind-an-http-proxy", }, items: [ "getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/set-up-infrastructure", "getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-kubernetes", "getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher", - ] - } + ], + }, ], }, { - type: 'category', - label: 'Resources', + type: "category", + label: "Resources", link: { - type: 'doc', + type: "doc", id: "getting-started/installation-and-upgrade/resources/resources", }, items: [ @@ -188,40 +186,40 @@ const sidebars = { "getting-started/installation-and-upgrade/resources/update-rancher-certificate", "getting-started/installation-and-upgrade/resources/bootstrap-password", 
"getting-started/installation-and-upgrade/resources/local-system-charts", - ] + ], }, "getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes", "getting-started/installation-and-upgrade/upgrade-kubernetes-without-upgrading-rancher", - ] - } - ] + ], + }, + ], }, { - type: 'category', - label: 'How-to Guides', + type: "category", + label: "How-to Guides", items: [ { - type: 'category', - label: 'New User Guides', + type: "category", + label: "New User Guides", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/new-user-guides", }, items: [ { - type: 'category', - label: 'Authentication, Permissions, and Global Configuration', + type: "category", + label: "Authentication, Permissions, and Global Configuration", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-permissions-and-global-configuration", }, items: [ { - type: 'category', - label: 'Configuring Authentication', + type: "category", + label: "Configuring Authentication", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/authentication-config", }, items: [ @@ -238,47 +236,47 @@ const sidebars = { "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-okta-saml", "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-generic-oidc", "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-amazon-cognito", - ] + ], }, { - type: 'category', - label: 'Configuring OpenLDAP', + type: "category", + label: "Configuring OpenLDAP", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-openldap/configure-openldap", }, items: [ 
"how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-openldap/openldap-config-reference", - ] + ], }, { - type: 'category', - label: 'Configuring Microsoft AD Federation Service (SAML)', + type: "category", + label: "Configuring Microsoft AD Federation Service (SAML)", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-microsoft-ad-federation-service-saml/configure-microsoft-ad-federation-service-saml", }, items: [ "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-microsoft-ad-federation-service-saml/configure-ms-adfs-for-rancher", "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-microsoft-ad-federation-service-saml/configure-rancher-for-ms-adfs", - ] + ], }, { - type: 'category', - label: 'Configuring Shibboleth (SAML)', + type: "category", + label: "Configuring Shibboleth (SAML)", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-shibboleth-saml/configure-shibboleth-saml", }, items: [ "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-shibboleth-saml/about-group-permissions", - ] + ], }, { - type: 'category', - label: 'Managing Role-Based Access Control (RBAC)', + type: "category", + label: "Managing Role-Based Access Control (RBAC)", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/manage-role-based-access-control-rbac", }, items: [ @@ -286,26 +284,26 @@ const sidebars = { "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles", 
"how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/custom-roles", "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/locked-roles", - ] + ], }, "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/jwt-authentication", { - type: 'category', - label: 'About Provisioning Drivers', + type: "category", + label: "About Provisioning Drivers", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/about-provisioning-drivers", }, items: [ "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-cluster-drivers", "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-node-drivers", - ] + ], }, { - type: 'category', - label: 'About RKE1 Templates', + type: "category", + label: "About RKE1 Templates", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/about-rke1-templates", }, items: [ @@ -317,51 +315,51 @@ const sidebars = { "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/apply-templates", "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/infrastructure", "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/example-use-cases", - ] + ], }, "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/pod-security-standards", "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates", 
"how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry", "how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/custom-branding", - ] + ], }, { - type: 'category', - label: 'Cluster Administration', + type: "category", + label: "Cluster Administration", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/manage-clusters/manage-clusters", }, items: [ { - type: 'category', - label: 'Access Clusters', + type: "category", + label: "Access Clusters", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/manage-clusters/access-clusters/access-clusters", }, items: [ "how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig", "how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint", "how-to-guides/new-user-guides/manage-clusters/access-clusters/add-users-to-clusters", - ] + ], }, { - type: 'category', - label: 'Install Cluster Autoscaler', + type: "category", + label: "Install Cluster Autoscaler", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/install-cluster-autoscaler", }, items: [ "how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups", - ] + ], }, { - type: 'category', - label: 'Create Kubernetes Persistent Storage', + type: "category", + label: "Create Kubernetes Persistent Storage", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage", }, items: [ @@ -371,20 +369,20 @@ const sidebars = { "how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/use-external-ceph-driver", 
"how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/about-glusterfs-volumes", "how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/install-iscsi-volumes", - ] + ], }, { - type: 'category', - label: 'Provisioning Storage Examples', + type: "category", + label: "Provisioning Storage Examples", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/manage-clusters/provisioning-storage-examples/provisioning-storage-examples", }, items: [ "how-to-guides/new-user-guides/manage-clusters/provisioning-storage-examples/persistent-storage-in-amazon-ebs", "how-to-guides/new-user-guides/manage-clusters/provisioning-storage-examples/nfs-storage", "how-to-guides/new-user-guides/manage-clusters/provisioning-storage-examples/vsphere-storage", - ] + ], }, "how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces", @@ -397,13 +395,17 @@ const sidebars = { "how-to-guides/new-user-guides/manage-clusters/nodes-and-node-pools", "how-to-guides/new-user-guides/manage-clusters/clean-cluster-nodes", - ] + + "how-to-guides/new-user-guides/manage-clusters/add-a-pod-security-policy", + + "how-to-guides/new-user-guides/manage-clusters/assign-pod-security-policies", + ], }, { - type: 'category', - label: 'Setting up a Kubernetes Cluster for Rancher Server', + type: "category", + label: "Setting up a Kubernetes Cluster for Rancher Server", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/kubernetes-cluster-setup/kubernetes-cluster-setup", }, items: [ @@ -414,10 +416,10 @@ const sidebars = { ], }, { - type: 'category', - label: 'Infrastructure Setup', + type: "category", + label: "Infrastructure Setup", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/infrastructure-setup/infrastructure-setup", }, items: [ @@ -431,19 +433,19 @@ const sidebars = { ], }, { - type: 'category', - label: 'Kubernetes Clusters in 
Rancher Setup', + type: "category", + label: "Kubernetes Clusters in Rancher Setup", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/kubernetes-clusters-in-rancher-setup", }, items: [ "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters", { - type: 'category', - label: 'Checklist for Production-Ready Clusters', + type: "category", + label: "Checklist for Production-Ready Clusters", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters/checklist-for-production-ready-clusters", }, items: [ @@ -452,10 +454,10 @@ const sidebars = { ], }, { - type: 'category', - label: 'Setting up Clusters from Hosted Kubernetes Providers', + type: "category", + label: "Setting up Clusters from Hosted Kubernetes Providers", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/set-up-clusters-from-hosted-kubernetes-providers", }, items: [ @@ -465,27 +467,27 @@ const sidebars = { "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/alibaba", "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/tencent", "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/huawei", - ] + ], }, { - type: 'category', - label: 'Launching Kubernetes on Windows Clusters', + type: "category", + label: "Launching Kubernetes on Windows Clusters", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/use-windows-clusters", }, items: [ 
"how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/azure-storageclass-configuration", "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/windows-linux-cluster-feature-parity", "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/network-requirements-for-host-gateway", - "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/workload-migration-guidance" - ] + "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/workload-migration-guidance", + ], }, { - type: 'category', - label: 'Setting up Cloud Providers', + type: "category", + label: "Setting up Cloud Providers", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers", }, items: [ @@ -494,33 +496,33 @@ const sidebars = { "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/google-compute-engine", "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/configure-in-tree-vsphere", "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/configure-out-of-tree-vsphere", - ] + ], }, { - type: 'category', - label: 'Migrate to an Out-of-tree Cloud Provider', + type: "category", + label: "Migrate to an Out-of-tree Cloud Provider", items: [ "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/migrate-to-an-out-of-tree-cloud-provider/migrate-to-out-of-tree-amazon", "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/migrate-to-an-out-of-tree-cloud-provider/migrate-to-out-of-tree-vsphere", "how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/migrate-to-an-out-of-tree-cloud-provider/migrate-to-out-of-tree-azure", - ] + ], }, 
"how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters", - ] + ], }, { - type: 'category', - label: 'Launching Kubernetes with Rancher', + type: "category", + label: "Launching Kubernetes with Rancher", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher", }, items: [ { - type: 'category', - label: 'Launching New Nodes in an Infra Provider', + type: "category", + label: "Launching New Nodes in an Infra Provider", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider", }, items: [ @@ -530,50 +532,50 @@ const sidebars = { "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-azure-cluster", { - type: 'category', - label: 'Creating a VMware vSphere Cluster', + type: "category", + label: "Creating a VMware vSphere Cluster", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/vsphere", }, items: [ "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/provision-kubernetes-clusters-in-vsphere", "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-credentials", "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/create-a-vm-template", - "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/shutdown-vm" - ] + "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/shutdown-vm", + ], }, { - type: 'category', - label: 'Creating a Nutanix AOS Cluster', + type: "category", + label: "Creating a Nutanix AOS Cluster", link: { - type: 
'doc', + type: "doc", id: "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/nutanix/nutanix", }, items: [ "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/nutanix/provision-kubernetes-clusters-in-aos", - ] - } - ] + ], + }, + ], }, "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/rke1-vs-rke2-differences", "how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents", - ] + ], }, { - type: 'category', - label: 'Kubernetes Resources Setup', + type: "category", + label: "Kubernetes Resources Setup", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/kubernetes-resources-setup/kubernetes-resources-setup", }, items: [ { - type: 'category', - label: 'Workloads and Pods', + type: "category", + label: "Workloads and Pods", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/kubernetes-resources-setup/workloads-and-pods/workloads-and-pods", }, items: [ @@ -581,13 +583,13 @@ const sidebars = { "how-to-guides/new-user-guides/kubernetes-resources-setup/workloads-and-pods/roll-back-workloads", "how-to-guides/new-user-guides/kubernetes-resources-setup/workloads-and-pods/upgrade-workloads", "how-to-guides/new-user-guides/kubernetes-resources-setup/workloads-and-pods/add-a-sidecar", - ] + ], }, { - type: 'category', - label: 'Horizontal Pod Autoscaler', + type: "category", + label: "Horizontal Pod Autoscaler", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/kubernetes-resources-setup/horizontal-pod-autoscaler/horizontal-pod-autoscaler", }, items: [ @@ -595,20 +597,20 @@ const sidebars = { "how-to-guides/new-user-guides/kubernetes-resources-setup/horizontal-pod-autoscaler/manage-hpas-with-ui", "how-to-guides/new-user-guides/kubernetes-resources-setup/horizontal-pod-autoscaler/manage-hpas-with-kubectl", 
"how-to-guides/new-user-guides/kubernetes-resources-setup/horizontal-pod-autoscaler/test-hpas-with-kubectl", - ] + ], }, { - type: 'category', - label: 'Load Balancer and Ingress Controller', + type: "category", + label: "Load Balancer and Ingress Controller", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/load-balancer-and-ingress-controller", }, items: [ "how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/layer-4-and-layer-7-load-balancing", "how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/add-ingresses", "how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/ingress-configuration", - ] + ], }, "how-to-guides/new-user-guides/kubernetes-resources-setup/create-services", @@ -619,25 +621,25 @@ const sidebars = { "how-to-guides/new-user-guides/kubernetes-resources-setup/secrets", "how-to-guides/new-user-guides/kubernetes-resources-setup/kubernetes-and-docker-registries", - ] + ], }, { - type: 'category', - label: 'Helm Charts and Apps', + type: "category", + label: "Helm Charts and Apps", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/helm-charts-in-rancher/helm-charts-in-rancher", }, items: [ "how-to-guides/new-user-guides/helm-charts-in-rancher/create-apps", - "how-to-guides/new-user-guides/helm-charts-in-rancher/oci-repositories" - ] + "how-to-guides/new-user-guides/helm-charts-in-rancher/oci-repositories", + ], }, { - type: 'category', - label: 'Backup, Restore, and Disaster Recovery', + type: "category", + label: "Backup, Restore, and Disaster Recovery", link: { - type: 'doc', + type: "doc", id: "how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/backup-restore-and-disaster-recovery", }, items: [ @@ -649,34 +651,34 @@ const sidebars = { 
"how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/restore-docker-installed-rancher", "how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher-launched-kubernetes-clusters", "how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/restore-rancher-launched-kubernetes-clusters-from-backup", - ] + ], }, "how-to-guides/new-user-guides/add-users-to-projects", "how-to-guides/new-user-guides/manage-namespaces", - ] + ], }, { - type: 'category', - label: 'Advanced User Guides', + type: "category", + label: "Advanced User Guides", link: { - type: 'doc', + type: "doc", id: "how-to-guides/advanced-user-guides/advanced-user-guides", }, items: [ { - type: 'category', - label: 'Project Administration', + type: "category", + label: "Project Administration", link: { - type: 'doc', + type: "doc", id: "how-to-guides/advanced-user-guides/manage-projects/manage-projects", }, items: [ { - type: 'category', - label: 'Project Resource Quotas', + type: "category", + label: "Project Resource Quotas", link: { - type: 'doc', + type: "doc", id: "how-to-guides/advanced-user-guides/manage-projects/manage-project-resource-quotas/manage-project-resource-quotas", }, items: [ @@ -684,15 +686,15 @@ const sidebars = { "how-to-guides/advanced-user-guides/manage-projects/manage-project-resource-quotas/override-default-limit-in-namespaces", "how-to-guides/advanced-user-guides/manage-projects/manage-project-resource-quotas/set-container-default-resource-limits", "how-to-guides/advanced-user-guides/manage-projects/manage-project-resource-quotas/resource-quota-types", - ] - } - ] + ], + }, + ], }, { - type: 'category', - label: 'Monitoring/Alerting Guides', + type: "category", + label: "Monitoring/Alerting Guides", link: { - type: 'doc', + type: "doc", id: "how-to-guides/advanced-user-guides/monitoring-alerting-guides/monitoring-alerting-guides", }, items: [ @@ -703,10 +705,10 @@ const sidebars = { 
"how-to-guides/advanced-user-guides/monitoring-alerting-guides/create-persistent-grafana-dashboard", "how-to-guides/advanced-user-guides/monitoring-alerting-guides/debug-high-memory-usage", { - type: 'category', - label: 'Prometheus Federator Guides', + type: "category", + label: "Prometheus Federator Guides", link: { - type: 'doc', + type: "doc", id: "how-to-guides/advanced-user-guides/monitoring-alerting-guides/prometheus-federator-guides/prometheus-federator-guides", }, items: [ @@ -714,39 +716,39 @@ const sidebars = { "how-to-guides/advanced-user-guides/monitoring-alerting-guides/prometheus-federator-guides/uninstall-prometheus-federator", "how-to-guides/advanced-user-guides/monitoring-alerting-guides/prometheus-federator-guides/customize-grafana-dashboards", "how-to-guides/advanced-user-guides/monitoring-alerting-guides/prometheus-federator-guides/set-up-workloads", - "how-to-guides/advanced-user-guides/monitoring-alerting-guides/prometheus-federator-guides/project-monitors" - ] - } - ] + "how-to-guides/advanced-user-guides/monitoring-alerting-guides/prometheus-federator-guides/project-monitors", + ], + }, + ], }, { - type: 'category', - label: 'Monitoring Configuration Guides', + type: "category", + label: "Monitoring Configuration Guides", link: { - type: 'doc', + type: "doc", id: "how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides/monitoring-v2-configuration-guides", }, items: [ { - type: 'category', - label: 'Advanced Configuration', + type: "category", + label: "Advanced Configuration", link: { - type: 'doc', + type: "doc", id: "how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides/advanced-configuration/advanced-configuration", }, items: [ "how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides/advanced-configuration/alertmanager", "how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides/advanced-configuration/prometheus", 
"how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides/advanced-configuration/prometheusrules", - ] - } - ] + ], + }, + ], }, { - type: 'category', - label: 'Istio Setup Guides', + type: "category", + label: "Istio Setup Guides", link: { - type: 'doc', + type: "doc", id: "how-to-guides/advanced-user-guides/istio-setup-guide/istio-setup-guide", }, items: [ @@ -756,32 +758,32 @@ const sidebars = { "how-to-guides/advanced-user-guides/istio-setup-guide/set-up-istio-gateway", "how-to-guides/advanced-user-guides/istio-setup-guide/set-up-traffic-management", "how-to-guides/advanced-user-guides/istio-setup-guide/generate-and-view-traffic", - ] + ], }, { - type: 'category', - label: 'CIS Scan Guides', + type: "category", + label: "Compliance Scan Guides", link: { - type: 'doc', - id: "how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides", + type: "doc", + id: "how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides", }, items: [ - "how-to-guides/advanced-user-guides/cis-scan-guides/install-rancher-cis-benchmark", - "how-to-guides/advanced-user-guides/cis-scan-guides/uninstall-rancher-cis-benchmark", - "how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan", - "how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan-periodically-on-a-schedule", - "how-to-guides/advanced-user-guides/cis-scan-guides/skip-tests", - "how-to-guides/advanced-user-guides/cis-scan-guides/view-reports", - "how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark", - "how-to-guides/advanced-user-guides/cis-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule", - "how-to-guides/advanced-user-guides/cis-scan-guides/create-a-custom-benchmark-version-to-run", - ] + "how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance", + "how-to-guides/advanced-user-guides/compliance-scan-guides/uninstall-rancher-compliance", + 
"how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan", + "how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan-periodically-on-a-schedule", + "how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests", + "how-to-guides/advanced-user-guides/compliance-scan-guides/view-reports", + "how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance", + "how-to-guides/advanced-user-guides/compliance-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule", + "how-to-guides/advanced-user-guides/compliance-scan-guides/create-a-custom-compliance-version-to-run", + ], }, { - type: 'category', - label: 'Enabling Experimental Features', + type: "category", + label: "Enabling Experimental Features", link: { - type: 'doc', + type: "doc", id: "how-to-guides/advanced-user-guides/enable-experimental-features/enable-experimental-features", }, items: [ @@ -791,7 +793,7 @@ const sidebars = { "how-to-guides/advanced-user-guides/enable-experimental-features/istio-traffic-management-features", "how-to-guides/advanced-user-guides/enable-experimental-features/continuous-delivery", "how-to-guides/advanced-user-guides/enable-experimental-features/cluster-role-aggregation", - ] + ], }, "how-to-guides/advanced-user-guides/open-ports-with-firewalld", "how-to-guides/advanced-user-guides/tune-etcd-for-large-installs", @@ -801,41 +803,41 @@ const sidebars = { "how-to-guides/advanced-user-guides/enable-cluster-agent-scheduling-customization", "how-to-guides/advanced-user-guides/configure-layer-7-nginx-load-balancer", "how-to-guides/advanced-user-guides/configure-oidc-provider", - ] - } - ] + ], + }, + ], }, { - type: 'category', - label: 'Reference Guides', + type: "category", + label: "Reference Guides", items: [ { - type: 'category', - label: 'Best Practice Guides', + type: "category", + label: "Best Practice Guides", link: { - type: 'doc', + type: "doc", id: "reference-guides/best-practices/best-practices", }, 
items: [ { - type: 'category', - label: 'Rancher Server', + type: "category", + label: "Rancher Server", link: { - type: 'doc', + type: "doc", id: "reference-guides/best-practices/rancher-server/rancher-server", }, items: [ "reference-guides/best-practices/rancher-server/on-premises-rancher-in-vsphere", "reference-guides/best-practices/rancher-server/rancher-deployment-strategy", "reference-guides/best-practices/rancher-server/tips-for-running-rancher", - "reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale" - ] + "reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale", + ], }, { - type: 'category', - label: 'Rancher-Managed Clusters', + type: "category", + label: "Rancher-Managed Clusters", link: { - type: 'doc', + type: "doc", id: "reference-guides/best-practices/rancher-managed-clusters/rancher-managed-clusters", }, items: [ @@ -843,37 +845,37 @@ const sidebars = { "reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices", "reference-guides/best-practices/rancher-managed-clusters/tips-to-set-up-containers", "reference-guides/best-practices/rancher-managed-clusters/rancher-managed-clusters-in-vsphere", - "reference-guides/best-practices/rancher-managed-clusters/disconnected-clusters" - ] - } - ] + "reference-guides/best-practices/rancher-managed-clusters/disconnected-clusters", + ], + }, + ], }, { - type: 'category', - label: 'Rancher Architecture', + type: "category", + label: "Rancher Architecture", link: { - type: 'doc', + type: "doc", id: "reference-guides/rancher-manager-architecture/rancher-manager-architecture", }, items: [ "reference-guides/rancher-manager-architecture/rancher-server-and-components", "reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters", "reference-guides/rancher-manager-architecture/architecture-recommendations", - ] + ], }, { - type: 'category', - label: 'Cluster Configuration', + type: 
"category", + label: "Cluster Configuration", link: { - type: 'doc', + type: "doc", id: "reference-guides/cluster-configuration/cluster-configuration", }, items: [ { - type: 'category', - label: 'Rancher Server Configuration', + type: "category", + label: "Rancher Server Configuration", link: { - type: 'doc', + type: "doc", id: "reference-guides/cluster-configuration/rancher-server-configuration/rancher-server-configuration", }, items: [ @@ -883,43 +885,43 @@ const sidebars = { "reference-guides/cluster-configuration/rancher-server-configuration/eks-cluster-configuration", "reference-guides/cluster-configuration/rancher-server-configuration/aks-cluster-configuration", { - type: 'category', - label: 'GKE Cluster Configuration Reference', + type: "category", + label: "GKE Cluster Configuration Reference", link: { - type: 'doc', + type: "doc", id: "reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-cluster-configuration", }, items: [ "reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-private-clusters", - ] + ], }, { - type: 'category', - label: 'Use Existing Nodes', + type: "category", + label: "Use Existing Nodes", link: { - type: 'doc', + type: "doc", id: "reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/use-existing-nodes", }, items: [ "reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/rancher-agent-options", - ] + ], }, "reference-guides/cluster-configuration/rancher-server-configuration/sync-clusters", - ] + ], }, { - type: 'category', - label: 'Downstream Cluster Configuration', + type: "category", + label: "Downstream Cluster Configuration", link: { - type: 'doc', + type: "doc", id: "reference-guides/cluster-configuration/downstream-cluster-configuration/downstream-cluster-configuration", }, items: [ { - type: 'category', - label: 'Node Template Configuration', + type: "category", + label: 
"Node Template Configuration", link: { - type: 'doc', + type: "doc", id: "reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/node-template-configuration", }, items: [ @@ -928,42 +930,42 @@ const sidebars = { "reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/azure", "reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/vsphere", "reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/nutanix", - ] + ], }, { - type: 'category', - label: 'Machine Configuration', + type: "category", + label: "Machine Configuration", link: { - type: 'doc', + type: "doc", id: "reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/machine-configuration", }, items: [ "reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/amazon-ec2", "reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/digitalocean", "reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/azure", - ] - } - ] - } - ] + ], + }, + ], + }, + ], }, { - type: 'category', - label: 'Single-Node Rancher in Docker', + type: "category", + label: "Single-Node Rancher in Docker", link: { - type: 'doc', + type: "doc", id: "reference-guides/single-node-rancher-in-docker/single-node-rancher-in-docker", }, items: [ "reference-guides/single-node-rancher-in-docker/http-proxy-configuration", "reference-guides/single-node-rancher-in-docker/advanced-options", - ] + ], }, { - type: 'category', - label: 'Backup & Restore Configuration', + type: "category", + label: "Backup & Restore Configuration", link: { - type: 'doc', + type: "doc", id: "reference-guides/backup-restore-configuration/backup-restore-configuration", }, items: [ @@ -975,10 +977,10 @@ const sidebars = { }, 
"reference-guides/kubernetes-concepts", { - type: 'category', - label: 'Monitoring Configuration Reference', + type: "category", + label: "Monitoring Configuration Reference", link: { - type: 'doc', + type: "doc", id: "reference-guides/monitoring-v2-configuration/monitoring-v2-configuration", }, items: [ @@ -990,21 +992,19 @@ const sidebars = { ], }, { - type: 'category', - label: 'Prometheus Federator', + type: "category", + label: "Prometheus Federator", link: { - type: 'doc', + type: "doc", id: "reference-guides/prometheus-federator/prometheus-federator", }, - items: [ - "reference-guides/prometheus-federator/rbac", - ] + items: ["reference-guides/prometheus-federator/rbac"], }, { - type: 'category', - label: 'User Settings', + type: "category", + label: "User Settings", link: { - type: 'doc', + type: "doc", id: "reference-guides/user-settings/user-settings", }, items: [ @@ -1015,16 +1015,16 @@ const sidebars = { ], }, { - type: 'category', - label: 'CLI with Rancher', + type: "category", + label: "CLI with Rancher", link: { - type: 'doc', + type: "doc", id: "reference-guides/cli-with-rancher/cli-with-rancher", }, items: [ "reference-guides/cli-with-rancher/rancher-cli", "reference-guides/cli-with-rancher/kubectl-utility", - ] + ], }, "reference-guides/rancher-cluster-tools", @@ -1035,26 +1035,26 @@ const sidebars = { "reference-guides/rke1-template-example-yaml", "reference-guides/rancher-webhook", { - type: 'category', - label: 'Rancher Security Guides', + type: "category", + label: "Rancher Security Guides", link: { - type: 'doc', + type: "doc", id: "reference-guides/rancher-security/rancher-security", }, items: [ { - type: 'category', - label: 'Hardening Guides', + type: "category", + label: "Hardening Guides", link: { - type: 'doc', + type: "doc", id: "reference-guides/rancher-security/hardening-guides/hardening-guides", }, items: [ { - type: 'category', - label: 'RKE Hardening Guides', + type: "category", + label: "RKE Hardening Guides", link: { - type: 
'doc', + type: "doc", id: "reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide", }, items: [ @@ -1062,10 +1062,10 @@ const sidebars = { ], }, { - type: 'category', - label: 'RKE2 Hardening Guides', + type: "category", + label: "RKE2 Hardening Guides", link: { - type: 'doc', + type: "doc", id: "reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-hardening-guide", }, items: [ @@ -1073,10 +1073,10 @@ const sidebars = { ], }, { - type: 'category', - label: 'K3s Hardening Guides', + type: "category", + label: "K3s Hardening Guides", link: { - type: 'doc', + type: "doc", id: "reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-hardening-guide", }, items: [ @@ -1086,10 +1086,10 @@ const sidebars = { ], }, { - type: 'category', - label: 'SELinux RPM', + type: "category", + label: "SELinux RPM", link: { - type: 'doc', + type: "doc", id: "reference-guides/rancher-security/selinux-rpm/selinux-rpm", }, items: [ @@ -1101,112 +1101,104 @@ const sidebars = { "reference-guides/rancher-security/rancher-security-best-practices", "reference-guides/rancher-security/security-advisories-and-cves", "reference-guides/rancher-security/psa-restricted-exemptions", - "reference-guides/rancher-security/rancher-webhook-hardening" + "reference-guides/rancher-security/rancher-webhook-hardening", ], - } - ] + }, + ], }, { - "type": "category", - "label": "Integrations in Rancher", - "link": { - "type": "doc", - "id": "integrations-in-rancher/integrations-in-rancher" + type: "category", + label: "Integrations in Rancher", + link: { + type: "doc", + id: "integrations-in-rancher/integrations-in-rancher", }, - "items": [ + items: [ "integrations-in-rancher/kubernetes-distributions/kubernetes-distributions", { - "type": "category", - "label": "Virtualization on Kubernetes with Harvester", - "link": { - "type": "doc", - "id": "integrations-in-rancher/harvester/harvester" + type: "category", + label: "Virtualization 
on Kubernetes with Harvester", + link: { + type: "doc", + id: "integrations-in-rancher/harvester/harvester", }, - "items": [ - "integrations-in-rancher/harvester/overview" - ] + items: ["integrations-in-rancher/harvester/overview"], }, { - "type": "category", - "label": "Cloud Native Storage with Longhorn", - "link": { - "type": "doc", - "id": "integrations-in-rancher/longhorn/longhorn" + type: "category", + label: "Cloud Native Storage with Longhorn", + link: { + type: "doc", + id: "integrations-in-rancher/longhorn/longhorn", }, - "items": [ - "integrations-in-rancher/longhorn/overview" - ] + items: ["integrations-in-rancher/longhorn/overview"], }, { - "type": "category", - "label": "Container Security with Neuvector", - "link": { - "type": "doc", - "id": "integrations-in-rancher/neuvector/neuvector" + type: "category", + label: "Container Security with Neuvector", + link: { + type: "doc", + id: "integrations-in-rancher/neuvector/neuvector", }, - "items": [ - "integrations-in-rancher/neuvector/overview" - ] + items: ["integrations-in-rancher/neuvector/overview"], }, "integrations-in-rancher/suse-observability/suse-observability", "integrations-in-rancher/kubewarden/kubewarden", "integrations-in-rancher/elemental/elemental", { - "type": "category", - "label": "Continuous Delivery with Fleet", - "link": { - "type": "doc", - "id": "integrations-in-rancher/fleet/fleet" + type: "category", + label: "Continuous Delivery with Fleet", + link: { + type: "doc", + id: "integrations-in-rancher/fleet/fleet", }, - "items": [ + items: [ "integrations-in-rancher/fleet/overview", "integrations-in-rancher/fleet/architecture", "integrations-in-rancher/fleet/windows-support", - "integrations-in-rancher/fleet/use-fleet-behind-a-proxy" - ] + "integrations-in-rancher/fleet/use-fleet-behind-a-proxy", + ], }, { - "type": "category", - "label": "Cluster API (CAPI) with Rancher Turtles", - "link": { - "type": "doc", - "id": "integrations-in-rancher/cluster-api/cluster-api" + type: 
"category", + label: "Cluster API (CAPI) with Rancher Turtles", + link: { + type: "doc", + id: "integrations-in-rancher/cluster-api/cluster-api", }, - "items": [ - "integrations-in-rancher/cluster-api/overview" - ] + items: ["integrations-in-rancher/cluster-api/overview"], }, "integrations-in-rancher/rancher-desktop", { - type: 'category', - label: 'Cloud Marketplace Integration', + type: "category", + label: "Cloud Marketplace Integration", link: { - type: 'doc', - id: "integrations-in-rancher/cloud-marketplace/cloud-marketplace" + type: "doc", + id: "integrations-in-rancher/cloud-marketplace/cloud-marketplace", }, items: [ { - type: 'category', - label: 'AWS Marketplace Integration', + type: "category", + label: "AWS Marketplace Integration", link: { - type: 'doc', - id: "integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/aws-cloud-marketplace" + type: "doc", + id: "integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/aws-cloud-marketplace", }, items: [ - 'integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements', - 'integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter', - 'integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter', - 'integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues' - ] + "integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/adapter-requirements", + "integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter", + "integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/uninstall-adapter", + "integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/common-issues", + ], }, - 'integrations-in-rancher/cloud-marketplace/supportconfig' - ] + "integrations-in-rancher/cloud-marketplace/supportconfig", + ], }, { - type: 'category', - label: 'CIS Scans', + type: "category", + label: "CIS Scans", link: { - type: 'doc', + type: "doc", id: 
"integrations-in-rancher/cis-scans/cis-scans", }, items: [ @@ -1217,10 +1209,10 @@ const sidebars = { ], }, { - type: 'category', - label: 'Istio', + type: "category", + label: "Istio", link: { - type: 'doc', + type: "doc", id: "integrations-in-rancher/istio/istio", }, items: [ @@ -1228,25 +1220,25 @@ const sidebars = { "integrations-in-rancher/istio/rbac-for-istio", "integrations-in-rancher/istio/disable-istio", { - type: 'category', - label: 'Configuration Options', + type: "category", + label: "Configuration Options", link: { - type: 'doc', + type: "doc", id: "integrations-in-rancher/istio/configuration-options/configuration-options", }, items: [ "integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations", "integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster", "integrations-in-rancher/istio/configuration-options/project-network-isolation", - ] - } - ] + ], + }, + ], }, { - type: 'category', - label: 'Logging', + type: "category", + label: "Logging", link: { - type: 'doc', + type: "doc", id: "integrations-in-rancher/logging/logging", }, items: [ @@ -1255,22 +1247,22 @@ const sidebars = { "integrations-in-rancher/logging/logging-helm-chart-options", "integrations-in-rancher/logging/taints-and-tolerations", { - type: 'category', - label: 'Custom Resource Configuration', + type: "category", + label: "Custom Resource Configuration", link: { - type: 'doc', + type: "doc", id: "integrations-in-rancher/logging/custom-resource-configuration/custom-resource-configuration", }, items: [ "integrations-in-rancher/logging/custom-resource-configuration/flows-and-clusterflows", - "integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs" - ] - } - ] + "integrations-in-rancher/logging/custom-resource-configuration/outputs-and-clusteroutputs", + ], + }, + ], }, { - type: 'category', - label: 'Monitoring and Alerting', + type: "category", + label: "Monitoring and Alerting", link: { type: 
"doc", id: "integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting", @@ -1281,14 +1273,14 @@ const sidebars = { "integrations-in-rancher/monitoring-and-alerting/built-in-dashboards", "integrations-in-rancher/monitoring-and-alerting/windows-support", "integrations-in-rancher/monitoring-and-alerting/promql-expressions", - ] + ], }, "integrations-in-rancher/rancher-extensions", - ] + ], }, { - type: 'category', - label: 'FAQ', + type: "category", + label: "FAQ", items: [ "faq/general-faq", "faq/deprecated-features", @@ -1298,18 +1290,18 @@ const sidebars = { "faq/security", "faq/container-network-interface-providers", "faq/rancher-is-no-longer-needed", - ] + ], }, { - type: 'category', - label: 'Troubleshooting', + type: "category", + label: "Troubleshooting", items: [ "troubleshooting/general-troubleshooting", { - type: 'category', - label: 'Kubernetes Components', + type: "category", + label: "Kubernetes Components", link: { - type: 'doc', + type: "doc", id: "troubleshooting/kubernetes-components/kubernetes-components", }, items: [ @@ -1317,11 +1309,11 @@ const sidebars = { "troubleshooting/kubernetes-components/troubleshooting-controlplane-nodes", "troubleshooting/kubernetes-components/troubleshooting-nginx-proxy", "troubleshooting/kubernetes-components/troubleshooting-worker-nodes-and-generic-components", - ] + ], }, { - type: 'category', - label: 'Other Troubleshooting Tips', + type: "category", + label: "Other Troubleshooting Tips", items: [ "troubleshooting/other-troubleshooting-tips/kubernetes-resources", "troubleshooting/other-troubleshooting-tips/networking", @@ -1331,32 +1323,30 @@ const sidebars = { "troubleshooting/other-troubleshooting-tips/logging", "troubleshooting/other-troubleshooting-tips/user-id-tracking-in-audit-logs", "troubleshooting/other-troubleshooting-tips/expired-webhook-certificate-rotation", - ] - } - ] + ], + }, + ], }, { - "type": "category", - "label": "Rancher Kubernetes API", - "items": [ + type: "category", + label: 
"Rancher Kubernetes API", + items: [ "api/quickstart", { - "type": "category", - "label": "Example Workflows", - "items": [ - "api/workflows/projects", + type: "category", + label: "Example Workflows", + items: ["api/workflows/projects", "api/workflows/kubeconfigs", - "api/workflows/tokens" - ] + "api/workflows/tokens"], }, "api/api-reference", "api/api-tokens", "api/extension-apiserver", "api/v3-rancher-api-guide", - ] + ], }, "contribute-to-rancher", "glossary", - ] -} + ], +}; module.exports = sidebars; From 413dc6dbfced72f074fb4bad91a3dfd0c5e87303 Mon Sep 17 00:00:00 2001 From: Krunal Hingu Date: Tue, 15 Jul 2025 09:41:47 +0530 Subject: [PATCH 22/57] refactor: update cis scan refrences --- .../compliance-scan-guides/compliance-scan-guides.md | 2 +- docs/integrations-in-rancher/cis-scans/cis-scans.md | 4 ++-- .../rancher-managed-clusters/monitoring-best-practices.md | 2 +- .../rke2-cluster-configuration.md | 4 ++-- .../monitoring-v2-configuration/receivers.md | 2 +- docs/reference-guides/rancher-cluster-tools.md | 2 +- docs/reference-guides/rancher-security/rancher-security.md | 6 +++--- docs/shared-files/_cluster-capabilities-table.md | 2 +- 8 files changed, 12 insertions(+), 12 deletions(-) diff --git a/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md index 4b304a13bea..87b8a1fa1db 100644 --- a/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md +++ b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md @@ -14,4 +14,4 @@ title: Compliance Scan Guides - [View Reports](view-reports.md) - [Enable Alerting for rancher-compliance](enable-alerting-for-rancher-compliance.md) - [Configure Alerts for Periodic Scan on a Schedule](configure-alerts-for-periodic-scan-on-a-schedule.md) -- [Create a Custom Benchmark Version to 
Run](create-a-custom-benchmark-version-to-run.md) +- [Create a Custom Compliance Version to Run](create-a-custom-compliance-version-to-run.md) diff --git a/docs/integrations-in-rancher/cis-scans/cis-scans.md b/docs/integrations-in-rancher/cis-scans/cis-scans.md index f170f997d66..6da7461d656 100644 --- a/docs/integrations-in-rancher/cis-scans/cis-scans.md +++ b/docs/integrations-in-rancher/cis-scans/cis-scans.md @@ -103,7 +103,7 @@ The `rancher-cis-benchmark` supports the CIS 1.6 Benchmark version. ## About Skipped and Not Applicable Tests -For a list of skipped and not applicable tests, refer to [this page](../../how-to-guides/advanced-user-guides/cis-scan-guides/skip-tests.md). +For a list of skipped and not applicable tests, refer to [this page](../../how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md). For now, only user-defined skipped tests are marked as skipped in the generated report. 
diff --git a/docs/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md b/docs/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md index 108689861dd..afa372b23a4 100644 --- a/docs/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md +++ b/docs/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md @@ -98,7 +98,7 @@ Monitoring the availability and performance of all your internal workloads is vi ## Security Monitoring -In addition to monitoring workloads to detect performance, availability or scalability problems, the cluster and the workloads running into it should also be monitored for potential security problems. A good starting point is to frequently run and alert on [CIS Scans](../../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md) which check if the cluster is configured according to security best practices. +In addition to monitoring workloads to detect performance, availability or scalability problems, the cluster and the workloads running into it should also be monitored for potential security problems. A good starting point is to frequently run and alert on [CIS Scans](../../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) which check if the cluster is configured according to security best practices. For the workloads, you can have a look at Kubernetes and Container security solutions like [NeuVector](https://www.suse.com/products/neuvector/), [Falco](https://falco.org/), [Aqua Kubernetes Security](https://www.aquasec.com/solutions/kubernetes-container-security/), [SysDig](https://sysdig.com/). 
diff --git a/docs/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md b/docs/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md index a76cb30552d..38eb81d2530 100644 --- a/docs/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md +++ b/docs/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md @@ -133,9 +133,9 @@ If the cloud provider you want to use is not listed as an option, you will need The default [pod security admission configuration template](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) for the cluster. -##### Worker CIS Profile +##### Worker Compliance Profile -Select a [CIS benchmark](../../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md) to validate the system configuration against. +Select a [compliance benchmark](../../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) to validate the system configuration against. 
##### Project Network Isolation diff --git a/docs/reference-guides/monitoring-v2-configuration/receivers.md b/docs/reference-guides/monitoring-v2-configuration/receivers.md index 811abcf8e96..16e3940e735 100644 --- a/docs/reference-guides/monitoring-v2-configuration/receivers.md +++ b/docs/reference-guides/monitoring-v2-configuration/receivers.md @@ -373,7 +373,7 @@ spec: # key: string ``` -For more information on enabling alerting for `rancher-cis-benchmark`, see [this section.](../../how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-compliance.md) +For more information on enabling alerting for `rancher-compliance-benchmark`, see [this section.](../../how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance.md) ## Trusted CA for Notifiers diff --git a/docs/reference-guides/rancher-cluster-tools.md b/docs/reference-guides/rancher-cluster-tools.md index ad46fbdd9d2..739abf9af26 100644 --- a/docs/reference-guides/rancher-cluster-tools.md +++ b/docs/reference-guides/rancher-cluster-tools.md @@ -46,4 +46,4 @@ For more information, refer to the Istio documentation [here.](../integrations-i Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark. 
-For more information, refer to the CIS scan documentation [here.](../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md) \ No newline at end of file +For more information, refer to the Compliance scan documentation [here.](../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) \ No newline at end of file diff --git a/docs/reference-guides/rancher-security/rancher-security.md b/docs/reference-guides/rancher-security/rancher-security.md index f16699b8ac6..21c67f25507 100644 --- a/docs/reference-guides/rancher-security/rancher-security.md +++ b/docs/reference-guides/rancher-security/rancher-security.md @@ -31,7 +31,7 @@ On this page, we provide security related documentation along with resources to NeuVector is an open-source, container-focused security application that is now integrated into Rancher. NeuVector provides production security, DevOps vulnerability protection, and a container firewall, et al. Please see the [Rancher docs](../../integrations-in-rancher/neuvector/neuvector.md) and the [NeuVector docs](https://open-docs.neuvector.com/) for more information. -## Running a CIS Security Scan on a Kubernetes Cluster +## Running a Compliance Security Scan on a Kubernetes Cluster Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the [CIS](https://www.cisecurity.org/cis-benchmarks/) (Center for Internet Security) Kubernetes Benchmark. @@ -45,8 +45,8 @@ The Benchmark provides recommendations of two types: Automated and Manual. We ru When Rancher runs a CIS security scan on a cluster, it generates a report showing the results of each test, including a summary with the number of passed, skipped and failed tests. The report also includes remediation steps for any failed tests. 
-For details, refer to the section on [security scans](../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md). - +For details, refer to the section on [security scans](../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md). + ## SELinux RPM [Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a security enhancement to Linux. After being historically used by government agencies, SELinux is now industry standard and is enabled by default on CentOS 7 and 8. diff --git a/docs/shared-files/_cluster-capabilities-table.md b/docs/shared-files/_cluster-capabilities-table.md index e53e3471aad..c4807fa5dba 100644 --- a/docs/shared-files/_cluster-capabilities-table.md +++ b/docs/shared-files/_cluster-capabilities-table.md @@ -8,7 +8,7 @@ | [Managing Projects, Namespaces and Workloads](../how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces.md) | ✓ | ✓ | ✓ | ✓ | | [Using App Catalogs](../how-to-guides/new-user-guides/helm-charts-in-rancher/helm-charts-in-rancher.md) | ✓ | ✓ | ✓ | ✓ | | Configuring Tools ([Alerts, Notifiers, Monitoring](../integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md), [Logging](../integrations-in-rancher/logging/logging.md), [Istio](../integrations-in-rancher/istio/istio.md)) | ✓ | ✓ | ✓ | ✓ | -| [Running Security Scans](../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md) | ✓ | ✓ | ✓ | ✓ | +| [Running Security Scans](../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) | ✓ | ✓ | ✓ | ✓ | | [Ability to rotate certificates](../how-to-guides/new-user-guides/manage-clusters/rotate-certificates.md) | ✓ | ✓ | | | | Ability to [backup](../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher-launched-kubernetes-clusters.md) and 
[restore](../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/restore-rancher-launched-kubernetes-clusters-from-backup.md) Rancher-launched clusters | ✓ | ✓ | | ✓4 | | [Cleaning Kubernetes components when clusters are no longer reachable from Rancher](../how-to-guides/new-user-guides/manage-clusters/clean-cluster-nodes.md) | ✓ | | | | From f0d5b421dabca873a5d43350dbb59170275de0e3 Mon Sep 17 00:00:00 2001 From: Krunal Hingu Date: Tue, 15 Jul 2025 10:37:35 +0530 Subject: [PATCH 23/57] refactor: move cis scans to compliance scans in rancher intergration doc --- ...eate-a-custom-compliance-version-to-run.md | 2 +- .../cis-scans/rbac-for-cis-scans.md | 52 ------------------- .../compliance-scans.md} | 14 ++--- .../configuration-reference.md | 28 +++++----- .../custom-benchmark.md | 14 ++--- .../rbac-for-compliance-scans.md | 48 +++++++++++++++++ .../skipped-and-not-applicable-tests.md | 2 +- docusaurus.config.js | 16 +++++- sidebars.js | 12 ++--- 9 files changed, 98 insertions(+), 90 deletions(-) delete mode 100644 docs/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md rename docs/integrations-in-rancher/{cis-scans/cis-scans.md => compliance-scans/compliance-scans.md} (91%) rename docs/integrations-in-rancher/{cis-scans => compliance-scans}/configuration-reference.md (68%) rename docs/integrations-in-rancher/{cis-scans => compliance-scans}/custom-benchmark.md (85%) create mode 100644 docs/integrations-in-rancher/compliance-scans/rbac-for-compliance-scans.md rename docs/integrations-in-rancher/{cis-scans => compliance-scans}/skipped-and-not-applicable-tests.md (99%) diff --git a/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/create-a-custom-compliance-version-to-run.md b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/create-a-custom-compliance-version-to-run.md index 97a896db883..a15fc96a7fd 100644 --- a/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/create-a-custom-compliance-version-to-run.md +++ 
b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/create-a-custom-compliance-version-to-run.md @@ -10,4 +10,4 @@ There could be some Kubernetes cluster setups that require custom configurations It is now possible to create a custom compliance version for running a cluster scan using the `rancher-compliance` application. -For details, see [this page.](../../../integrations-in-rancher/cis-scans/custom-benchmark.md) \ No newline at end of file +For details, see [this page.](../../../integrations-in-rancher/compliance-scans/custom-benchmark.md) \ No newline at end of file diff --git a/docs/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md b/docs/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md deleted file mode 100644 index 795e64cef29..00000000000 --- a/docs/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -title: Roles-based Access Control ---- - - - - - -This section describes the permissions required to use the rancher-cis-benchmark App. - -The rancher-cis-benchmark is a cluster-admin only feature by default. - -However, the `rancher-cis-benchmark` chart installs these two default `ClusterRoles`: - -- cis-admin -- cis-view - -In Rancher, only cluster owners and global administrators have `cis-admin` access by default. - -Note: If you were using the `cis-edit` role added in Rancher v2.5 setup, it has now been removed since -Rancher v2.5.2 because it essentially is same as `cis-admin`. If you happen to create any clusterrolebindings -for `cis-edit`, please update them to use `cis-admin` ClusterRole instead. - -## Cluster-Admin Access - -Rancher CIS Scans is a cluster-admin only feature by default. 
-This means only the Rancher global admins, and the cluster’s cluster-owner can: - -- Install/Uninstall the rancher-cis-benchmark App -- See the navigation links for CIS Benchmark CRDs - ClusterScanBenchmarks, ClusterScanProfiles, ClusterScans -- List the default ClusterScanBenchmarks and ClusterScanProfiles -- Create/Edit/Delete new ClusterScanProfiles -- Create/Edit/Delete a new ClusterScan to run the CIS scan on the cluster -- View and Download the ClusterScanReport created after the ClusterScan is complete - - -## Summary of Default Permissions for Kubernetes Default Roles - -The rancher-cis-benchmark creates three `ClusterRoles` and adds the CIS Benchmark CRD access to the following default K8s `ClusterRoles`: - -| ClusterRole created by chart | Default K8s ClusterRole | Permissions given with Role -| ------------------------------| ---------------------------| ---------------------------| -| `cis-admin` | `admin`| Ability to CRUD clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CR -| `cis-view` | `view `| Ability to List(R) clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CR - - -By default only cluster-owner role will have ability to manage and use `rancher-cis-benchmark` feature. - -The other Rancher roles (cluster-member, project-owner, project-member) do not have any default permissions to manage and use rancher-cis-benchmark resources. - -But if a cluster-owner wants to delegate access to other users, they can do so by creating ClusterRoleBindings between these users and the above CIS ClusterRoles manually. -There is no automatic role aggregation supported for the `rancher-cis-benchmark` ClusterRoles. 
diff --git a/docs/integrations-in-rancher/cis-scans/cis-scans.md b/docs/integrations-in-rancher/compliance-scans/compliance-scans.md similarity index 91% rename from docs/integrations-in-rancher/cis-scans/cis-scans.md rename to docs/integrations-in-rancher/compliance-scans/compliance-scans.md index 6da7461d656..06663aaf770 100644 --- a/docs/integrations-in-rancher/cis-scans/cis-scans.md +++ b/docs/integrations-in-rancher/compliance-scans/compliance-scans.md @@ -1,14 +1,14 @@ --- -title: CIS Scans +title: Compliance Scans --- - + -Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark. The CIS scans can run on any Kubernetes cluster, including hosted Kubernetes providers such as EKS, AKS, and GKE. +Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark. Compliance scans can run on any Kubernetes cluster, including hosted Kubernetes providers such as EKS, AKS, and GKE. -The `rancher-cis-benchmark` app leverages kube-bench, an open-source tool from Aqua Security, to check clusters for CIS Kubernetes Benchmark compliance. Also, to generate a cluster-wide report, the application utilizes Sonobuoy for report aggregation. +The `rancher-compliance` app leverages kube-bench, an open-source tool from Aqua Security, to check clusters for CIS Kubernetes Benchmark compliance. To generate a cluster-wide report, the application uses Sonobuoy for report aggregation. ## About the CIS Benchmark @@ -94,7 +94,7 @@ In order to pass the "Hardened" profile, you will need to follow the steps on th The default profile and the supported CIS benchmark version depends on the type of cluster that will be scanned: -The `rancher-cis-benchmark` supports the CIS 1.6 Benchmark version. +The `rancher-compliance` app supports the CIS 1.6 Benchmark version. 
- For RKE Kubernetes clusters, the RKE Permissive 1.6 profile is the default. - EKS and GKE have their own CIS Benchmarks published by `kube-bench`. The corresponding test profiles are used by default for those clusters. @@ -111,7 +111,7 @@ Any skipped tests that are defined as being skipped by one of the default profil ## Roles-based Access Control -For information about permissions, refer to [this page](rbac-for-cis-scans.md) +For information about permissions, refer to [this page](rbac-for-compliance-scans.md) ## Configuration @@ -119,4 +119,4 @@ For more information about configuring the custom resources for the scans, profi ## How-to Guides -Please refer to the [CIS Scan Guides](../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) to learn how to run CIS scans. +Please refer to the [Compliance Scan Guides](../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) to learn how to run compliance scans. \ No newline at end of file diff --git a/docs/integrations-in-rancher/cis-scans/configuration-reference.md b/docs/integrations-in-rancher/compliance-scans/configuration-reference.md similarity index 68% rename from docs/integrations-in-rancher/cis-scans/configuration-reference.md rename to docs/integrations-in-rancher/compliance-scans/configuration-reference.md index 3394bc2702b..9ad9c40b18e 100644 --- a/docs/integrations-in-rancher/cis-scans/configuration-reference.md +++ b/docs/integrations-in-rancher/compliance-scans/configuration-reference.md @@ -3,27 +3,27 @@ title: Configuration --- - + -This configuration reference is intended to help you manage the custom resources created by the `rancher-cis-benchmark` application. These resources are used for performing CIS scans on a cluster, skipping tests, setting the test profile that will be used during a scan, and other customization. 
+This configuration reference is intended to help you manage the custom resources created by the `rancher-compliance` application. These resources are used for performing compliance scans on a cluster, skipping tests, setting the test profile that will be used during a scan, and other customization. -To configure the custom resources, go to the **Cluster Dashboard** To configure the CIS scans, +To configure the custom resources, go to the **Cluster Dashboard**. To configure the compliance scans: 1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to configure CIS scans and click **Explore**. -1. In the left navigation bar, click **CIS Benchmark**. +1. On the **Clusters** page, go to the cluster where you want to configure compliance scans and click **Explore**. +1. In the left navigation bar, click **Compliance**. ## Scans -A scan is created to trigger a CIS scan on the cluster based on the defined profile. A report is created after the scan is completed. +A scan is created to trigger a compliance scan on the cluster based on the defined profile. A report is created after the scan is completed. When configuring a scan, you need to define the name of the scan profile that will be used with the `scanProfileName` directive. An example ClusterScan custom resource is below: ```yaml -apiVersion: cis.cattle.io/v1 +apiVersion: compliance.cattle.io/v1 kind: ClusterScan metadata: name: rke-cis @@ -33,11 +33,11 @@ spec: ## Profiles -A profile contains the configuration for the CIS scan, which includes the benchmark version to use and any specific tests to skip in that benchmark. +A profile contains the configuration for the compliance scan, which includes the benchmark version to use and any specific tests to skip in that benchmark. :::caution -By default, a few ClusterScanProfiles are installed as part of the `rancher-cis-benchmark` chart. 
If a user edits these default benchmarks or profiles, the next chart update will reset them back. So it is advisable for users to not edit the default ClusterScanProfiles. +By default, a few ClusterScanProfiles are installed as part of the `rancher-compliance` chart. If a user edits these default benchmarks or profiles, the next chart update will reset them back. So it is advisable for users to not edit the default ClusterScanProfiles. ::: @@ -50,12 +50,12 @@ When you create a new profile, you will also need to give it a name. An example `ClusterScanProfile` is below: ```yaml -apiVersion: cis.cattle.io/v1 +apiVersion: compliance.cattle.io/v1 kind: ClusterScanProfile metadata: annotations: meta.helm.sh/release-name: clusterscan-operator - meta.helm.sh/release-namespace: cis-operator-system + meta.helm.sh/release-namespace: compliance-operator-system labels: app.kubernetes.io/managed-by: Helm name: "" @@ -70,7 +70,7 @@ spec: A benchmark version is the name of benchmark to run using `kube-bench`, as well as the valid configuration parameters for that benchmark. -A `ClusterScanBenchmark` defines the CIS `BenchmarkVersion` name and test configurations. The `BenchmarkVersion` name is a parameter provided to the `kube-bench` tool. +A `ClusterScanBenchmark` defines the Compliance `BenchmarkVersion` name and test configurations. The `BenchmarkVersion` name is a parameter provided to the `kube-bench` tool. By default, a few `BenchmarkVersion` names and test configurations are packaged as part of the CIS scan application. When this feature is enabled, these default BenchmarkVersions will be automatically installed and available for users to create a ClusterScanProfile. 
@@ -89,12 +89,12 @@ A ClusterScanBenchmark consists of the fields: An example `ClusterScanBenchmark` is below: ```yaml -apiVersion: cis.cattle.io/v1 +apiVersion: compliance.cattle.io/v1 kind: ClusterScanBenchmark metadata: annotations: meta.helm.sh/release-name: clusterscan-operator - meta.helm.sh/release-namespace: cis-operator-system + meta.helm.sh/release-namespace: compliance-operator-system creationTimestamp: "2020-08-28T18:18:07Z" generation: 1 labels: diff --git a/docs/integrations-in-rancher/cis-scans/custom-benchmark.md b/docs/integrations-in-rancher/compliance-scans/custom-benchmark.md similarity index 85% rename from docs/integrations-in-rancher/cis-scans/custom-benchmark.md rename to docs/integrations-in-rancher/compliance-scans/custom-benchmark.md index 4ec353cc60b..ecf83196136 100644 --- a/docs/integrations-in-rancher/cis-scans/custom-benchmark.md +++ b/docs/integrations-in-rancher/compliance-scans/custom-benchmark.md @@ -3,15 +3,15 @@ title: Creating a Custom Benchmark Version for Running a Cluster Scan --- - + -Each Benchmark Version defines a set of test configuration files that define the CIS tests to be run by the kube-bench tool. -The `rancher-cis-benchmark` application installs a few default Benchmark Versions which are listed under CIS Benchmark application menu. +Each Benchmark Version defines a set of test configuration files that define the Compliance tests to be run by the kube-bench tool. +The `rancher-compliance` application installs a few default Benchmark Versions which are listed under Compliance application menu. But there could be some Kubernetes cluster setups that require custom configurations of the Benchmark tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream CIS Benchmarks look for them. -It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-cis-benchmark` application. 
+It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-compliance` application. When a cluster scan is run, you need to select a Profile which points to a specific Benchmark Version. @@ -46,7 +46,7 @@ To prepare a custom benchmark version ConfigMap, suppose we want to add a custom 1. In the upper left corner, click **☰ > Cluster Management**. 1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**. -1. In the left navigation bar, click **CIS Benchmark > Benchmark Version**. +1. In the left navigation bar, click **Compliance > Benchmark Version**. 1. Click **Create**. 1. Enter the **Name** and a description for your custom benchmark version. 1. Choose the cluster provider that your benchmark version applies to. @@ -60,7 +60,7 @@ To run a scan using your custom benchmark version, you need to add a new Profile 1. In the upper left corner, click **☰ > Cluster Management**. 1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**. -1. In the left navigation bar, click **CIS Benchmark > Profile**. +1. In the left navigation bar, click **Compliance > Profile**. 1. Click **Create**. 1. Provide a **Name** and description. In this example, we name it `foo-profile`. 1. Choose the Benchmark Version from the dropdown. @@ -74,7 +74,7 @@ To run a scan, 1. In the upper left corner, click **☰ > Cluster Management**. 1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**. -1. In the left navigation bar, click **CIS Benchmark > Scan**. +1. In the left navigation bar, click **Compliance > Scan**. 1. Click **Create**. 1. Choose the new cluster scan profile. 1. Click **Create**. 
diff --git a/docs/integrations-in-rancher/compliance-scans/rbac-for-compliance-scans.md b/docs/integrations-in-rancher/compliance-scans/rbac-for-compliance-scans.md new file mode 100644 index 00000000000..71348248436 --- /dev/null +++ b/docs/integrations-in-rancher/compliance-scans/rbac-for-compliance-scans.md @@ -0,0 +1,48 @@ +--- +title: Roles-based Access Control +--- + + + + + +This section describes the permissions required to use the `rancher-compliance` app. + +The `rancher-compliance` app is a cluster-admin-only feature by default. + +However, the `rancher-compliance` chart installs these two default `ClusterRoles`: + +- compliance-admin +- compliance-view + +In Rancher, only cluster owners and global administrators have `compliance-admin` access by default. + +## Cluster-Admin Access + +Rancher Compliance Scans is a cluster-admin-only feature by default. +This means only Rancher global admins and the cluster’s cluster-owner can: + +- Install/Uninstall the `rancher-compliance` app +- See the navigation links for Compliance CRDs - ClusterScanBenchmarks, ClusterScanProfiles, ClusterScans +- List the default ClusterScanBenchmarks and ClusterScanProfiles +- Create/Edit/Delete new ClusterScanProfiles +- Create/Edit/Delete a new ClusterScan to run the Compliance scan on the cluster +- View and Download the ClusterScanReport created after the ClusterScan is complete + + +## Summary of Default Permissions for Kubernetes Default Roles + +The `rancher-compliance` chart creates two `ClusterRoles` and adds the Compliance CRD access to the following default K8s `ClusterRoles`: + +| ClusterRole created by chart | Default K8s ClusterRole | Permissions given with Role +| ------------------------------| ---------------------------| ---------------------------| +| `compliance-admin` | `admin`| Ability to CRUD clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CR +| `compliance-view` | `view`| Ability to List(R) clusterscanbenchmarks, clusterscanprofiles, 
clusterscans, clusterscanreports CR + + +By default, only the cluster-owner role has the ability to manage and use the `rancher-compliance` feature. + +The other Rancher roles (cluster-member, project-owner, project-member) do not have any default permissions to manage and use `rancher-compliance` resources. + +If a cluster-owner wants to delegate access to other users, they can do so by manually creating ClusterRoleBindings between those users and the Compliance ClusterRoles above. +There is no automatic role aggregation supported for the `rancher-compliance` ClusterRoles. diff --git a/docs/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md b/docs/integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests.md similarity index 99% rename from docs/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md rename to docs/integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests.md index 3920a1588c5..965d015d9c8 100644 --- a/docs/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md +++ b/docs/integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests.md @@ -3,7 +3,7 @@ title: Skipped and Not Applicable Tests --- - + This section lists the tests that are skipped in the permissive test profile for RKE. 
diff --git a/docusaurus.config.js b/docusaurus.config.js index 9dd2f0eb776..c9776ee1ff1 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -1628,8 +1628,20 @@ module.exports = { to: "/how-to-guides/advanced-user-guides/compliance-scan-guides/create-a-custom-compliance-version-to-run", }, { - from: "/how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides", - to: "/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides", + from: "/integrations-in-rancher/cis-scans/configuration-reference", + to: "/integrations-in-rancher/compliance-scans/configuration-reference", + }, + { + from: "/integrations-in-rancher/cis-scans/rbac-for-cis-scans", + to: "/integrations-in-rancher/compliance-scans/rbac-for-compliance-scans", + }, + { + from: "/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests", + to: "/integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests", + }, + { + from: "/integrations-in-rancher/cis-scans/custom-benchmark", + to: "/integrations-in-rancher/compliance-scans/custom-benchmark", }, ], }, diff --git a/sidebars.js b/sidebars.js index a62839cdc35..fb285be5a75 100644 --- a/sidebars.js +++ b/sidebars.js @@ -1196,16 +1196,16 @@ const sidebars = { }, { type: "category", - label: "CIS Scans", + label: "Compliance Scans", link: { type: "doc", - id: "integrations-in-rancher/cis-scans/cis-scans", + id: "integrations-in-rancher/compliance-scans/compliance-scans", }, items: [ - "integrations-in-rancher/cis-scans/configuration-reference", - "integrations-in-rancher/cis-scans/rbac-for-cis-scans", - "integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests", - "integrations-in-rancher/cis-scans/custom-benchmark", + "integrations-in-rancher/compliance-scans/configuration-reference", + "integrations-in-rancher/compliance-scans/rbac-for-compliance-scans", + "integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests", + 
"integrations-in-rancher/compliance-scans/custom-benchmark", ], }, { From 8ef8637d4f2cb4caa5c9e55962e9fd1ca1104bd1 Mon Sep 17 00:00:00 2001 From: Krunal Hingu Date: Tue, 15 Jul 2025 11:05:26 +0530 Subject: [PATCH 24/57] change the documentation for compliance in versioned_docs 2.12 --- .../cis-scan-guides/cis-scan-guides.md | 17 ------ ...reate-a-custom-benchmark-version-to-run.md | 13 ----- ...able-alerting-for-rancher-cis-benchmark.md | 24 --------- .../install-rancher-cis-benchmark.md | 15 ------ .../cis-scan-guides/run-a-scan.md | 26 ---------- .../uninstall-rancher-cis-benchmark.md | 13 ----- .../cis-scan-guides/view-reports.md | 23 -------- .../compliance-scan-guides.md | 17 ++++++ ...-alerts-for-periodic-scan-on-a-schedule.md | 16 +++--- ...eate-a-custom-compliance-version-to-run.md | 13 +++++ .../enable-alerting-for-rancher-compliance.md | 24 +++++++++ .../install-rancher-compliance.md | 21 ++++++++ .../run-a-scan-periodically-on-a-schedule.md | 8 +-- .../compliance-scan-guides/run-a-scan.md | 26 ++++++++++ .../skip-tests.md | 20 +++---- .../uninstall-rancher-compliance.md | 13 +++++ .../compliance-scan-guides/view-reports.md | 23 ++++++++ .../cis-scans/rbac-for-cis-scans.md | 52 ------------------- .../compliance-scans.md} | 16 +++--- .../configuration-reference.md | 28 +++++----- .../custom-benchmark.md | 14 ++--- .../rbac-for-compliance-scans.md | 48 +++++++++++++++++ .../skipped-and-not-applicable-tests.md | 2 +- .../monitoring-best-practices.md | 2 +- versioned_sidebars/version-2.12-sidebars.json | 34 ++++++------ 25 files changed, 255 insertions(+), 253 deletions(-) delete mode 100644 versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md delete mode 100644 versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/create-a-custom-benchmark-version-to-run.md delete mode 100644 
versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark.md delete mode 100644 versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/install-rancher-cis-benchmark.md delete mode 100644 versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan.md delete mode 100644 versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/uninstall-rancher-cis-benchmark.md delete mode 100644 versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/view-reports.md create mode 100644 versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md rename versioned_docs/version-2.12/how-to-guides/advanced-user-guides/{cis-scan-guides => compliance-scan-guides}/configure-alerts-for-periodic-scan-on-a-schedule.md (56%) create mode 100644 versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/create-a-custom-compliance-version-to-run.md create mode 100644 versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance.md create mode 100644 versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance.md rename versioned_docs/version-2.12/how-to-guides/advanced-user-guides/{cis-scan-guides => compliance-scan-guides}/run-a-scan-periodically-on-a-schedule.md (75%) create mode 100644 versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan.md rename versioned_docs/version-2.12/how-to-guides/advanced-user-guides/{cis-scan-guides => compliance-scan-guides}/skip-tests.md (55%) create mode 100644 versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/uninstall-rancher-compliance.md create mode 100644 
versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/view-reports.md delete mode 100644 versioned_docs/version-2.12/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md rename versioned_docs/version-2.12/integrations-in-rancher/{cis-scans/cis-scans.md => compliance-scans/compliance-scans.md} (90%) rename versioned_docs/version-2.12/integrations-in-rancher/{cis-scans => compliance-scans}/configuration-reference.md (68%) rename versioned_docs/version-2.12/integrations-in-rancher/{cis-scans => compliance-scans}/custom-benchmark.md (85%) create mode 100644 versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/rbac-for-compliance-scans.md rename versioned_docs/version-2.12/integrations-in-rancher/{cis-scans => compliance-scans}/skipped-and-not-applicable-tests.md (99%) diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md deleted file mode 100644 index a7c6ed43472..00000000000 --- a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: CIS Scan Guides ---- - - - - - -- [Install rancher-cis-benchmark](install-rancher-cis-benchmark.md) -- [Uninstall rancher-cis-benchmark](uninstall-rancher-cis-benchmark.md) -- [Run a Scan](run-a-scan.md) -- [Run a Scan Periodically on a Schedule](run-a-scan-periodically-on-a-schedule.md) -- [Skip Tests](skip-tests.md) -- [View Reports](view-reports.md) -- [Enable Alerting for rancher-cis-benchmark](enable-alerting-for-rancher-cis-benchmark.md) -- [Configure Alerts for Periodic Scan on a Schedule](configure-alerts-for-periodic-scan-on-a-schedule.md) -- [Create a Custom Benchmark Version to Run](create-a-custom-benchmark-version-to-run.md) \ No newline at end of file diff --git 
a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/create-a-custom-benchmark-version-to-run.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/create-a-custom-benchmark-version-to-run.md deleted file mode 100644 index 8d3b66c7e4e..00000000000 --- a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/create-a-custom-benchmark-version-to-run.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Create a Custom Benchmark Version for Running a Cluster Scan ---- - - - - - -There could be some Kubernetes cluster setups that require custom configurations of the Benchmark tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream CIS Benchmarks look for them. - -It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-cis-benchmark` application. - -For details, see [this page.](../../../integrations-in-rancher/cis-scans/custom-benchmark.md) \ No newline at end of file diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark.md deleted file mode 100644 index ef2b5ae330d..00000000000 --- a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark.md +++ /dev/null @@ -1,24 +0,0 @@ ---- -title: Enable Alerting for Rancher CIS Benchmark ---- - - - - - -Alerts can be configured to be sent out for a scan that runs on a schedule. - -:::note Prerequisite: - -Before enabling alerts for `rancher-cis-benchmark`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. 
For more information, see [this section.](../../../reference-guides/monitoring-v2-configuration/receivers.md) - -While configuring the routes for `rancher-cis-benchmark` alerts, you can specify the matching using the key-value pair `job: rancher-cis-scan`. An example route configuration is [here.](../../../reference-guides/monitoring-v2-configuration/receivers.md#example-route-config-for-cis-scan-alerts) - -::: - -While installing or upgrading the `rancher-cis-benchmark` Helm chart, set the following flag to `true` in the `values.yaml`: - -```yaml -alerts: - enabled: true -``` \ No newline at end of file diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/install-rancher-cis-benchmark.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/install-rancher-cis-benchmark.md deleted file mode 100644 index c6987a97c64..00000000000 --- a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/install-rancher-cis-benchmark.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Install Rancher CIS Benchmark ---- - - - - - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to install CIS Benchmark and click **Explore**. -1. In the left navigation bar, click **Apps > Charts**. -1. Click **CIS Benchmark** -1. Click **Install**. - -**Result:** The CIS scan application is deployed on the Kubernetes cluster. 
diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan.md deleted file mode 100644 index 2fede69bee6..00000000000 --- a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: Run a Scan ---- - - - - - -When a ClusterScan custom resource is created, it launches a new CIS scan on the cluster for the chosen ClusterScanProfile. - -:::note - -There is currently a limitation of running only one CIS scan at a time for a cluster. If you create multiple ClusterScan custom resources, they will be run one after the other by the operator, and until one scan finishes, the rest of the ClusterScan custom resources will be in the "Pending" state. - -::: - -To run a scan, - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. -1. Click **CIS Benchmark > Scan**. -1. Click **Create**. -1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on. -1. Click **Create**. - -**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears. 
\ No newline at end of file diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/uninstall-rancher-cis-benchmark.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/uninstall-rancher-cis-benchmark.md deleted file mode 100644 index df23f7abbdc..00000000000 --- a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/uninstall-rancher-cis-benchmark.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Uninstall Rancher CIS Benchmark ---- - - - - - -1. From the **Cluster Dashboard,** go to the left navigation bar and click **Apps > Installed Apps**. -1. Go to the `cis-operator-system` namespace and check the boxes next to `rancher-cis-benchmark-crd` and `rancher-cis-benchmark`. -1. Click **Delete** and confirm **Delete**. - -**Result:** The `rancher-cis-benchmark` application is uninstalled. \ No newline at end of file diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/view-reports.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/view-reports.md deleted file mode 100644 index bb9045033bc..00000000000 --- a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/view-reports.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: View Reports ---- - - - - - -To view the generated CIS scan reports, - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. -1. Click **CIS Benchmark > Scan**. -1. The **Scans** page will show the generated reports. To see a detailed report, go to a scan report and click the name. - -One can download the report from the Scans list or from the scan detail page. - -To get the verbose version of the CIS scan results, run the following command on the cluster that was scanned. Note that the scan must be completed before this can be done. 
- -```console -export REPORT="scan-report-name" -kubectl get clusterscanreport $REPORT -o json |jq ".spec.reportJSON | fromjson" | jq -r ".actual_value_map_data" | base64 -d | gunzip | jq . -``` diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md new file mode 100644 index 00000000000..87b8a1fa1db --- /dev/null +++ b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md @@ -0,0 +1,17 @@ +--- +title: Compliance Scan Guides +--- + + + + + +- [Install rancher-compliance](install-rancher-compliance.md) +- [Uninstall rancher-compliance](uninstall-rancher-compliance.md) +- [Run a Scan](run-a-scan.md) +- [Run a Scan Periodically on a Schedule](run-a-scan-periodically-on-a-schedule.md) +- [Skip Tests](skip-tests.md) +- [View Reports](view-reports.md) +- [Enable Alerting for rancher-compliance](enable-alerting-for-rancher-compliance.md) +- [Configure Alerts for Periodic Scan on a Schedule](configure-alerts-for-periodic-scan-on-a-schedule.md) +- [Create a Custom Benchmark Version to Run](create-a-custom-compliance-version-to-run.md) diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule.md similarity index 56% rename from versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule.md rename to versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule.md index 204f95c05bd..5dfad6be847 100644 --- 
a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule.md +++ b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule.md @@ -3,7 +3,7 @@ title: Configure Alerts for Periodic Scan on a Schedule --- - + It is possible to run a ClusterScan on a schedule. @@ -12,27 +12,27 @@ A scheduled scan can also specify if you should receive alerts when the scan com Alerts are supported only for a scan that runs on a schedule. -The CIS Benchmark application supports two types of alerts: +The compliance application supports two types of alerts: - Alert on scan completion: This alert is sent out when the scan run finishes. The alert includes details including the ClusterScan's name and the ClusterScanProfile name. - Alert on scan failure: This alert is sent out if there are some test failures in the scan run or if the scan is in a `Fail` state. :::note Prerequisite -Before enabling alerts for `rancher-cis-benchmark`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.](../../../reference-guides/monitoring-v2-configuration/receivers.md) +Before enabling alerts for `rancher-compliance`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.](../../../reference-guides/monitoring-v2-configuration/receivers.md) -While configuring the routes for `rancher-cis-benchmark` alerts, you can specify the matching using the key-value pair `job: rancher-cis-scan`. An example route configuration is [here.](../../../reference-guides/monitoring-v2-configuration/receivers.md#example-route-config-for-cis-scan-alerts) +While configuring the routes for `rancher-compliance` alerts, you can specify the matching using the key-value pair `job: rancher-compliance-scan`. 
An example route configuration is [here.](../../../reference-guides/monitoring-v2-configuration/receivers.md#example-route-config-for-compliance-scan-alerts)
 
 :::
 
 To configure alerts for a scan that runs on a schedule,
 
-1. Please enable alerts on the `rancher-cis-benchmark` application. For more information, see [this page](../../../how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark.md).
+1. Please enable alerts on the `rancher-compliance` application. For more information, see [this page](../../../how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance.md).
 1. In the upper left corner, click **☰ > Cluster Management**.
-1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**.
-1. Click **CIS Benchmark > Scan**.
+1. On the **Clusters** page, go to the cluster where you want to run a Compliance scan and click **Explore**.
+1. Click **Compliance > Scan**.
 1. Click **Create**.
-1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
+1. Choose a cluster scan profile. The profile determines which Compliance version will be used and which tests will be performed. If you choose the Default profile, then the Compliance Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
 1. Choose the option **Run scan on a schedule**.
 1. Enter a valid [cron schedule expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) in the field **Schedule**.
 1. Check the boxes next to the Alert types under **Alerting**.
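Taken together, the steps above correspond to a scheduled `ClusterScan` resource roughly like the sketch below. The `scheduledScanConfig` and `scanAlertRule` field names are an assumption based on the scan operator's custom resource, not taken from this page — verify them against the CRD installed on your cluster.

```yaml
# Hypothetical sketch of a scheduled ClusterScan with alerting enabled.
# Field names under spec.scheduledScanConfig are assumptions; check the
# installed ClusterScan CRD before use. "nightly-scan" is a placeholder.
apiVersion: compliance.cattle.io/v1
kind: ClusterScan
metadata:
  name: nightly-scan
spec:
  scanProfileName: ""          # name of the ClusterScanProfile to run
  scheduledScanConfig:
    cronSchedule: "0 0 * * *"  # run every day at midnight
    retentionCount: 3          # keep the three most recent reports
    scanAlertRule:
      alertOnComplete: true    # alert when the scan run finishes
      alertOnFailure: true     # alert on test failures or a Fail state
```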
diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/create-a-custom-compliance-version-to-run.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/create-a-custom-compliance-version-to-run.md
new file mode 100644
index 00000000000..a15fc96a7fd
--- /dev/null
+++ b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/create-a-custom-compliance-version-to-run.md
@@ -0,0 +1,13 @@
+---
+title: Create a Custom Compliance Version for Running a Cluster Scan
+---
+
+
+
+
+
+There could be some Kubernetes cluster setups that require custom configurations of the Compliance tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream compliance benchmarks look for them.
+
+It is now possible to create a custom compliance version for running a cluster scan using the `rancher-compliance` application.
+
+For details, see [this page.](../../../integrations-in-rancher/compliance-scans/custom-benchmark.md)
\ No newline at end of file
diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance.md
new file mode 100644
index 00000000000..d5328a0dd0c
--- /dev/null
+++ b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance.md
@@ -0,0 +1,24 @@
+---
+title: Enable Alerting for Rancher Compliance
+---
+
+
+
+
+
+Alerts can be configured to be sent out for a scan that runs on a schedule.
+
+:::note Prerequisite:
+
+Before enabling alerts for `rancher-compliance`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes.
For more information, see [this section.](../../../reference-guides/monitoring-v2-configuration/receivers.md) + +While configuring the routes for `rancher-compliance` alerts, you can specify the matching using the key-value pair `job: rancher-compliance-scan`. An example route configuration is [here.](../../../reference-guides/monitoring-v2-configuration/receivers.md#example-route-config-for-compliance-scan-alerts) + +::: + +While installing or upgrading the `rancher-compliance` Helm chart, set the following flag to `true` in the `values.yaml`: + +```yaml +alerts: + enabled: true +``` \ No newline at end of file diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance.md new file mode 100644 index 00000000000..d7a00786ea5 --- /dev/null +++ b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance.md @@ -0,0 +1,21 @@ +--- +title: Install Rancher Compliance +--- + + + + + +1. In the upper left corner, click **☰ > Cluster Management**. +1. On the **Clusters** page, go to the cluster where you want to install Compliance and click **Explore**. +1. In the left navigation bar, click **Apps > Charts**. +1. Click **Compliance** +1. Click **Install**. + +**Result:** The compliance scan application is deployed on the Kubernetes cluster. + +:::note + +If you are running Kubernetes v1.24 or earlier, and have a [Pod Security Policy](../../new-user-guides/authentication-permissions-and-global-configuration/create-pod-security-policies.md) (PSP) hardened cluster, Compliance 4.0.0 and later disable PSPs by default. To install Compliance on a PSP-hardened cluster, set `global.psp.enabled` to `true` in the values before installing the chart. 
[Pod Security Admission](../../new-user-guides/authentication-permissions-and-global-configuration/pod-security-standards.md) (PSA) hardened clusters aren't affected. + +::: diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan-periodically-on-a-schedule.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan-periodically-on-a-schedule.md similarity index 75% rename from versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan-periodically-on-a-schedule.md rename to versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan-periodically-on-a-schedule.md index 076fbdf409b..49a9126da64 100644 --- a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan-periodically-on-a-schedule.md +++ b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan-periodically-on-a-schedule.md @@ -3,15 +3,15 @@ title: Run a Scan Periodically on a Schedule --- - + To run a ClusterScan on a schedule, 1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. -1. Click **CIS Benchmark > Scan**. -1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on. +1. On the **Clusters** page, go to the cluster where you want to run a Compliance scan and click **Explore**. +1. Click **Compliance > Scan**. +1. Choose a cluster scan profile. The profile determines which Compliance version will be used and which tests will be performed. 
If you choose the Default profile, then the Compliance Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on. 1. Choose the option **Run scan on a schedule**. 1. Enter a valid cron schedule expression in the field **Schedule**. 1. Choose a **Retention** count, which indicates the number of reports maintained for this recurring scan. By default this count is 3. When this retention limit is reached, older reports will get purged. diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan.md new file mode 100644 index 00000000000..55fc296f88d --- /dev/null +++ b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan.md @@ -0,0 +1,26 @@ +--- +title: Run a Scan +--- + + + + + +When a ClusterScan custom resource is created, it launches a new compliance scan on the cluster for the chosen ClusterScanProfile. + +:::note + +There is currently a limitation of running only one compliance scan at a time for a cluster. If you create multiple ClusterScan custom resources, they will be run one after the other by the operator, and until one scan finishes, the rest of the ClusterScan custom resources will be in the "Pending" state. + +::: + +To run a scan, + +1. In the upper left corner, click **☰ > Cluster Management**. +1. On the **Clusters** page, go to the cluster where you want to run a compliance scan and click **Explore**. +1. Click **Compliance > Scan**. +1. Click **Create**. +1. Choose a cluster scan profile. The profile determines which Compliance version will be used and which tests will be performed. If you choose the Default profile, then the Compliance Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on. +1. Click **Create**. + +**Result:** A report is generated with the scan results. 
To see the results, click the name of the scan that appears.
\ No newline at end of file
diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/skip-tests.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md
similarity index 55%
rename from versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/skip-tests.md
rename to versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md
index 7492bc03f0b..c28eb027a4b 100644
--- a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/cis-scan-guides/skip-tests.md
+++ b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md
@@ -3,36 +3,36 @@ title: Skip Tests
 ---
 
-
+
 
-CIS scans can be run using test profiles with user-defined skips.
+Compliance scans can be run using test profiles with user-defined skips.
 
-To skip tests, you will create a custom CIS scan profile. A profile contains the configuration for the CIS scan, which includes the benchmark versions to use and any specific tests to skip in that benchmark.
+To skip tests, you will create a custom Compliance scan profile. A profile contains the configuration for the Compliance scan, which includes the benchmark versions to use and any specific tests to skip in that benchmark.
 
 1. In the upper left corner, click **☰ > Cluster Management**.
-1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**.
-1. Click **CIS Benchmark > Profile**.
-1. From here, you can create a profile in multiple ways. To make a new profile, click **Create** and fill out the form in the UI. To make a new profile based on an existing profile, go to the existing profile and click **⋮ Clone**. If you are filling out the form, add the tests to skip using the test IDs, using the relevant CIS Benchmark as a reference.
If you are creating the new test profile as YAML, you will add the IDs of the tests to skip in the `skipTests` directive. You will also give the profile a name:
+1. On the **Clusters** page, go to the cluster where you want to run a Compliance scan and click **Explore**.
+1. Click **Compliance > Profile**.
+1. From here, you can create a profile in multiple ways. To make a new profile, click **Create** and fill out the form in the UI. To make a new profile based on an existing profile, go to the existing profile and click **⋮ Clone**. If you are filling out the form, add the tests to skip using the test IDs, using the relevant compliance benchmark as a reference. If you are creating the new test profile as YAML, you will add the IDs of the tests to skip in the `skipTests` directive. You will also give the profile a name:
 
   ```yaml
-  apiVersion: cis.cattle.io/v1
+  apiVersion: compliance.cattle.io/v1
   kind: ClusterScanProfile
   metadata:
     annotations:
       meta.helm.sh/release-name: clusterscan-operator
-      meta.helm.sh/release-namespace: cis-operator-system
+      meta.helm.sh/release-namespace: compliance-operator-system
     labels:
       app.kubernetes.io/managed-by: Helm
     name: ""
   spec:
-    benchmarkVersion: cis-1.5
+    benchmarkVersion: rke2-cis-1.7
     skipTests:
     - "1.1.20"
     - "1.1.21"
   ```
 
 1. Click **Create**.
 
-**Result:** A new CIS scan profile is created.
+**Result:** A new compliance profile is created.
 
 When you [run a scan](./run-a-scan.md) that uses this profile, the defined tests will be skipped during the scan. The skipped tests will be marked in the generated report as `Skip`.
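A profile with `skipTests` set is used like any other: a `ClusterScan` references it by name through `scanProfileName`, following the resource shape shown in the configuration reference. A minimal sketch (the metadata name is a placeholder, and the profile name is left for you to fill in):

```yaml
# Minimal sketch: run a scan with a profile that skips tests.
# "scan-with-skips" is a placeholder; set scanProfileName to the name
# you gave the ClusterScanProfile above.
apiVersion: compliance.cattle.io/v1
kind: ClusterScan
metadata:
  name: scan-with-skips
spec:
  scanProfileName: ""   # name of the profile with skipTests set
```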
diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/uninstall-rancher-compliance.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/uninstall-rancher-compliance.md new file mode 100644 index 00000000000..313acf79555 --- /dev/null +++ b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/uninstall-rancher-compliance.md @@ -0,0 +1,13 @@ +--- +title: Uninstall Rancher Compliance +--- + + + + + +1. From the **Cluster Dashboard,** go to the left navigation bar and click **Apps > Installed Apps**. +1. Go to the `compliance-operator-system` namespace and check the boxes next to `rancher-compliance-crd` and `rancher-compliance`. +1. Click **Delete** and confirm **Delete**. + +**Result:** The `rancher-compliance` application is uninstalled. \ No newline at end of file diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/view-reports.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/view-reports.md new file mode 100644 index 00000000000..ad042390485 --- /dev/null +++ b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/view-reports.md @@ -0,0 +1,23 @@ +--- +title: View Reports +--- + + + + + +To view the generated Compliance scan reports, + +1. In the upper left corner, click **☰ > Cluster Management**. +1. On the **Clusters** page, go to the cluster where you want to run a Compliance scan and click **Explore**. +1. Click **Compliance > Scan**. +1. The **Scans** page will show the generated reports. To see a detailed report, go to a scan report and click the name. + +One can download the report from the Scans list or from the scan detail page. + +To get the verbose version of the compliance scan results, run the following command on the cluster that was scanned. Note that the scan must be completed before this can be done. 
+ +```console +export REPORT="scan-report-name" +kubectl get clusterscanreports.compliance.cattle.io $REPORT -o json |jq ".spec.reportJSON | fromjson" | jq -r ".actual_value_map_data" | base64 -d | gunzip | jq . +``` diff --git a/versioned_docs/version-2.12/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md b/versioned_docs/version-2.12/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md deleted file mode 100644 index 795e64cef29..00000000000 --- a/versioned_docs/version-2.12/integrations-in-rancher/cis-scans/rbac-for-cis-scans.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -title: Roles-based Access Control ---- - - - - - -This section describes the permissions required to use the rancher-cis-benchmark App. - -The rancher-cis-benchmark is a cluster-admin only feature by default. - -However, the `rancher-cis-benchmark` chart installs these two default `ClusterRoles`: - -- cis-admin -- cis-view - -In Rancher, only cluster owners and global administrators have `cis-admin` access by default. - -Note: If you were using the `cis-edit` role added in Rancher v2.5 setup, it has now been removed since -Rancher v2.5.2 because it essentially is same as `cis-admin`. If you happen to create any clusterrolebindings -for `cis-edit`, please update them to use `cis-admin` ClusterRole instead. - -## Cluster-Admin Access - -Rancher CIS Scans is a cluster-admin only feature by default. 
-This means only the Rancher global admins, and the cluster’s cluster-owner can: - -- Install/Uninstall the rancher-cis-benchmark App -- See the navigation links for CIS Benchmark CRDs - ClusterScanBenchmarks, ClusterScanProfiles, ClusterScans -- List the default ClusterScanBenchmarks and ClusterScanProfiles -- Create/Edit/Delete new ClusterScanProfiles -- Create/Edit/Delete a new ClusterScan to run the CIS scan on the cluster -- View and Download the ClusterScanReport created after the ClusterScan is complete - - -## Summary of Default Permissions for Kubernetes Default Roles - -The rancher-cis-benchmark creates three `ClusterRoles` and adds the CIS Benchmark CRD access to the following default K8s `ClusterRoles`: - -| ClusterRole created by chart | Default K8s ClusterRole | Permissions given with Role -| ------------------------------| ---------------------------| ---------------------------| -| `cis-admin` | `admin`| Ability to CRUD clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CR -| `cis-view` | `view `| Ability to List(R) clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CR - - -By default only cluster-owner role will have ability to manage and use `rancher-cis-benchmark` feature. - -The other Rancher roles (cluster-member, project-owner, project-member) do not have any default permissions to manage and use rancher-cis-benchmark resources. - -But if a cluster-owner wants to delegate access to other users, they can do so by creating ClusterRoleBindings between these users and the above CIS ClusterRoles manually. -There is no automatic role aggregation supported for the `rancher-cis-benchmark` ClusterRoles. 
diff --git a/versioned_docs/version-2.12/integrations-in-rancher/cis-scans/cis-scans.md b/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/compliance-scans.md similarity index 90% rename from versioned_docs/version-2.12/integrations-in-rancher/cis-scans/cis-scans.md rename to versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/compliance-scans.md index f170f997d66..06663aaf770 100644 --- a/versioned_docs/version-2.12/integrations-in-rancher/cis-scans/cis-scans.md +++ b/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/compliance-scans.md @@ -1,14 +1,14 @@ --- -title: CIS Scans +title: Compliance Scans --- - + -Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark. The CIS scans can run on any Kubernetes cluster, including hosted Kubernetes providers such as EKS, AKS, and GKE. +Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark. The Compliance scans can run on any Kubernetes cluster, including hosted Kubernetes providers such as EKS, AKS, and GKE. -The `rancher-cis-benchmark` app leverages kube-bench, an open-source tool from Aqua Security, to check clusters for CIS Kubernetes Benchmark compliance. Also, to generate a cluster-wide report, the application utilizes Sonobuoy for report aggregation. +The `rancher-compliance` app leverages kube-bench, an open-source tool from Aqua Security, to check clusters for CIS Kubernetes Benchmark compliance. Also, to generate a cluster-wide report, the application utilizes Sonobuoy for report aggregation. 
 ## About the CIS Benchmark
@@ -94,7 +94,7 @@ In order to pass the "Hardened" profile, you will need to follow the steps on th
 
 The default profile and the supported CIS benchmark version depends on the type of cluster that will be scanned:
 
-The `rancher-cis-benchmark` supports the CIS 1.6 Benchmark version.
+The `rancher-compliance` application supports the CIS 1.6 Benchmark version.
 
 - For RKE Kubernetes clusters, the RKE Permissive 1.6 profile is the default.
 - EKS and GKE have their own CIS Benchmarks published by `kube-bench`. The corresponding test profiles are used by default for those clusters.
@@ -103,7 +103,7 @@
 
 ## About Skipped and Not Applicable Tests
 
-For a list of skipped and not applicable tests, refer to [this page](../../how-to-guides/advanced-user-guides/cis-scan-guides/skip-tests.md).
+For a list of skipped and not applicable tests, refer to [this page](../../how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md).
 
 For now, only user-defined skipped tests are marked as skipped in the generated report.
 
@@ -111,7 +111,7 @@ Any skipped tests that are defined as being skipped by one of the default profil
 
 ## Roles-based Access Control
 
-For information about permissions, refer to [this page](rbac-for-cis-scans.md)
+For information about permissions, refer to [this page](rbac-for-compliance-scans.md)
 
 ## Configuration
 
@@ -119,4 +119,4 @@ For more information about configuring the custom resources for the scans, profi
 
 ## How-to Guides
 
-Please refer to the [CIS Scan Guides](../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md) to learn how to run CIS scans.
+Please refer to the [Compliance Scan Guides](../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) to learn how to run compliance scans.
\ No newline at end of file
diff --git a/versioned_docs/version-2.12/integrations-in-rancher/cis-scans/configuration-reference.md b/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/configuration-reference.md
similarity index 68%
rename from versioned_docs/version-2.12/integrations-in-rancher/cis-scans/configuration-reference.md
rename to versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/configuration-reference.md
index 3394bc2702b..9ad9c40b18e 100644
--- a/versioned_docs/version-2.12/integrations-in-rancher/cis-scans/configuration-reference.md
+++ b/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/configuration-reference.md
@@ -3,27 +3,27 @@ title: Configuration
 ---
 
-
+
 
-This configuration reference is intended to help you manage the custom resources created by the `rancher-cis-benchmark` application. These resources are used for performing CIS scans on a cluster, skipping tests, setting the test profile that will be used during a scan, and other customization.
+This configuration reference is intended to help you manage the custom resources created by the `rancher-compliance` application. These resources are used for performing compliance scans on a cluster, skipping tests, setting the test profile that will be used during a scan, and other customization.
 
-To configure the custom resources, go to the **Cluster Dashboard** To configure the CIS scans,
+To configure the custom resources, go to the **Cluster Dashboard**. To configure the compliance scans,
 
 1. In the upper left corner, click **☰ > Cluster Management**.
-1. On the **Clusters** page, go to the cluster where you want to configure CIS scans and click **Explore**.
-1. In the left navigation bar, click **CIS Benchmark**.
+1. On the **Clusters** page, go to the cluster where you want to configure compliance scans and click **Explore**.
+1. In the left navigation bar, click **Compliance**.
## Scans -A scan is created to trigger a CIS scan on the cluster based on the defined profile. A report is created after the scan is completed. +A scan is created to trigger a compliance scan on the cluster based on the defined profile. A report is created after the scan is completed. When configuring a scan, you need to define the name of the scan profile that will be used with the `scanProfileName` directive. An example ClusterScan custom resource is below: ```yaml -apiVersion: cis.cattle.io/v1 +apiVersion: compliance.cattle.io/v1 kind: ClusterScan metadata: name: rke-cis @@ -33,11 +33,11 @@ spec: ## Profiles -A profile contains the configuration for the CIS scan, which includes the benchmark version to use and any specific tests to skip in that benchmark. +A profile contains the configuration for the compliance scan, which includes the benchmark version to use and any specific tests to skip in that benchmark. :::caution -By default, a few ClusterScanProfiles are installed as part of the `rancher-cis-benchmark` chart. If a user edits these default benchmarks or profiles, the next chart update will reset them back. So it is advisable for users to not edit the default ClusterScanProfiles. +By default, a few ClusterScanProfiles are installed as part of the `rancher-compliance` chart. If a user edits these default benchmarks or profiles, the next chart update will reset them back. So it is advisable for users to not edit the default ClusterScanProfiles. ::: @@ -50,12 +50,12 @@ When you create a new profile, you will also need to give it a name. 
An example `ClusterScanProfile` is below: ```yaml -apiVersion: cis.cattle.io/v1 +apiVersion: compliance.cattle.io/v1 kind: ClusterScanProfile metadata: annotations: meta.helm.sh/release-name: clusterscan-operator - meta.helm.sh/release-namespace: cis-operator-system + meta.helm.sh/release-namespace: compliance-operator-system labels: app.kubernetes.io/managed-by: Helm name: "" @@ -70,7 +70,7 @@ spec: A benchmark version is the name of benchmark to run using `kube-bench`, as well as the valid configuration parameters for that benchmark. -A `ClusterScanBenchmark` defines the CIS `BenchmarkVersion` name and test configurations. The `BenchmarkVersion` name is a parameter provided to the `kube-bench` tool. +A `ClusterScanBenchmark` defines the Compliance `BenchmarkVersion` name and test configurations. The `BenchmarkVersion` name is a parameter provided to the `kube-bench` tool. By default, a few `BenchmarkVersion` names and test configurations are packaged as part of the CIS scan application. When this feature is enabled, these default BenchmarkVersions will be automatically installed and available for users to create a ClusterScanProfile. 
@@ -89,12 +89,12 @@ A ClusterScanBenchmark consists of the fields:
 
 An example `ClusterScanBenchmark` is below:
 
 ```yaml
-apiVersion: cis.cattle.io/v1
+apiVersion: compliance.cattle.io/v1
 kind: ClusterScanBenchmark
 metadata:
   annotations:
     meta.helm.sh/release-name: clusterscan-operator
-    meta.helm.sh/release-namespace: cis-operator-system
+    meta.helm.sh/release-namespace: compliance-operator-system
   creationTimestamp: "2020-08-28T18:18:07Z"
   generation: 1
   labels:
diff --git a/versioned_docs/version-2.12/integrations-in-rancher/cis-scans/custom-benchmark.md b/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/custom-benchmark.md
similarity index 85%
rename from versioned_docs/version-2.12/integrations-in-rancher/cis-scans/custom-benchmark.md
rename to versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/custom-benchmark.md
index 4ec353cc60b..ecf83196136 100644
--- a/versioned_docs/version-2.12/integrations-in-rancher/cis-scans/custom-benchmark.md
+++ b/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/custom-benchmark.md
@@ -3,15 +3,15 @@ title: Creating a Custom Benchmark Version for Running a Cluster Scan
 ---
 
-
+
 
-Each Benchmark Version defines a set of test configuration files that define the CIS tests to be run by the kube-bench tool.
-The `rancher-cis-benchmark` application installs a few default Benchmark Versions which are listed under CIS Benchmark application menu.
+Each Benchmark Version defines a set of test configuration files that define the Compliance tests to be run by the kube-bench tool.
+The `rancher-compliance` application installs a few default Benchmark Versions which are listed under the Compliance application menu.
 
 But there could be some Kubernetes cluster setups that require custom configurations of the Benchmark tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream CIS Benchmarks look for them.
-It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-cis-benchmark` application. +It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-compliance` application. When a cluster scan is run, you need to select a Profile which points to a specific Benchmark Version. @@ -46,7 +46,7 @@ To prepare a custom benchmark version ConfigMap, suppose we want to add a custom 1. In the upper left corner, click **☰ > Cluster Management**. 1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**. -1. In the left navigation bar, click **CIS Benchmark > Benchmark Version**. +1. In the left navigation bar, click **Compliance > Benchmark Version**. 1. Click **Create**. 1. Enter the **Name** and a description for your custom benchmark version. 1. Choose the cluster provider that your benchmark version applies to. @@ -60,7 +60,7 @@ To run a scan using your custom benchmark version, you need to add a new Profile 1. In the upper left corner, click **☰ > Cluster Management**. 1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**. -1. In the left navigation bar, click **CIS Benchmark > Profile**. +1. In the left navigation bar, click **Compliance > Profile**. 1. Click **Create**. 1. Provide a **Name** and description. In this example, we name it `foo-profile`. 1. Choose the Benchmark Version from the dropdown. @@ -74,7 +74,7 @@ To run a scan, 1. In the upper left corner, click **☰ > Cluster Management**. 1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**. -1. In the left navigation bar, click **CIS Benchmark > Scan**. +1. In the left navigation bar, click **Compliance > Scan**. 1. Click **Create**. 1. Choose the new cluster scan profile. 1. Click **Create**. 
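As a rough YAML equivalent of the UI steps above, the `foo-profile` created for the custom benchmark would look something like the sketch below. Here `foo` stands in for whatever you named the custom benchmark version — it is an assumption for illustration, not a value from this page:

```yaml
# Hypothetical YAML form of the profile created in the UI steps above.
# "foo" is a placeholder for the name of your custom ClusterScanBenchmark.
apiVersion: compliance.cattle.io/v1
kind: ClusterScanProfile
metadata:
  name: foo-profile
spec:
  benchmarkVersion: foo   # the custom benchmark version to run
```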
diff --git a/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/rbac-for-compliance-scans.md b/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/rbac-for-compliance-scans.md new file mode 100644 index 00000000000..71348248436 --- /dev/null +++ b/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/rbac-for-compliance-scans.md @@ -0,0 +1,48 @@ +--- +title: Role-based Access Control +--- + + + + + +This section describes the permissions required to use the rancher-compliance App. + +The rancher-compliance App is a cluster-admin-only feature by default. + +However, the `rancher-compliance` chart installs these two default `ClusterRoles`: + +- compliance-admin +- compliance-view + +In Rancher, only cluster owners and global administrators have `compliance-admin` access by default. + +## Cluster-Admin Access + +Rancher Compliance Scans is a cluster-admin-only feature by default. +This means only Rancher global admins and the cluster’s cluster-owner can: + +- Install/Uninstall the rancher-compliance App +- See the navigation links for Compliance CRDs - ClusterScanBenchmarks, ClusterScanProfiles, ClusterScans +- List the default ClusterScanBenchmarks and ClusterScanProfiles +- Create/Edit/Delete new ClusterScanProfiles +- Create/Edit/Delete a new ClusterScan to run the Compliance scan on the cluster +- View and Download the ClusterScanReport created after the ClusterScan is complete + + +## Summary of Default Permissions for Kubernetes Default Roles + +The `rancher-compliance` chart creates these `ClusterRoles` and adds Compliance CRD access to the following default K8s `ClusterRoles`: +| ClusterRole created by chart | Default K8s ClusterRole | Permissions given with Role | ------------------------------| ---------------------------| ---------------------------| | `compliance-admin` | `admin`| Ability to CRUD clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CR | `compliance-view` | `view 
`| Ability to List(R) clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CR + + +By default, only the cluster-owner role has the ability to manage and use the `rancher-compliance` feature. + +The other Rancher roles (cluster-member, project-owner, project-member) do not have any default permissions to manage and use rancher-compliance resources. + +But if a cluster-owner wants to delegate access to other users, they can do so by manually creating ClusterRoleBindings between those users and the Compliance ClusterRoles above. +There is no automatic role aggregation supported for the `rancher-compliance` ClusterRoles. diff --git a/versioned_docs/version-2.12/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md b/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests.md similarity index 99% rename from versioned_docs/version-2.12/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md rename to versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests.md index 3920a1588c5..965d015d9c8 100644 --- a/versioned_docs/version-2.12/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests.md +++ b/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests.md @@ -3,7 +3,7 @@ title: Skipped and Not Applicable Tests --- - + This section lists the tests that are skipped in the permissive test profile for RKE. 
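The manual delegation described in the rbac-for-compliance-scans page above (binding additional users to the chart's ClusterRoles) can be sketched with a standard `rbac.authorization.k8s.io/v1` ClusterRoleBinding; the user name below is purely illustrative.

```yaml
# Hedged sketch: gives one additional user read-only access to the
# Compliance resources via the compliance-view ClusterRole that the
# rancher-compliance chart installs. The user name is illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: compliance-view-extra-user
subjects:
- kind: User
  name: jane@example.com            # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: compliance-view
  apiGroup: rbac.authorization.k8s.io
```

Because there is no role aggregation for these ClusterRoles, an explicit binding like this is the way access extends beyond cluster-owner.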
diff --git a/versioned_docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md b/versioned_docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md index 108689861dd..afa372b23a4 100644 --- a/versioned_docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md +++ b/versioned_docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md @@ -98,7 +98,7 @@ Monitoring the availability and performance of all your internal workloads is vi ## Security Monitoring -In addition to monitoring workloads to detect performance, availability or scalability problems, the cluster and the workloads running into it should also be monitored for potential security problems. A good starting point is to frequently run and alert on [CIS Scans](../../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md) which check if the cluster is configured according to security best practices. +In addition to monitoring workloads to detect performance, availability or scalability problems, the cluster and the workloads running in it should also be monitored for potential security problems. A good starting point is to frequently run and alert on [CIS Scans](../../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) which check if the cluster is configured according to security best practices. For the workloads, you can have a look at Kubernetes and Container security solutions like [NeuVector](https://www.suse.com/products/neuvector/), [Falco](https://falco.org/), [Aqua Kubernetes Security](https://www.aquasec.com/solutions/kubernetes-container-security/), [SysDig](https://sysdig.com/). 
diff --git a/versioned_sidebars/version-2.12-sidebars.json b/versioned_sidebars/version-2.12-sidebars.json index 2e75050068c..3bd2e656839 100644 --- a/versioned_sidebars/version-2.12-sidebars.json +++ b/versioned_sidebars/version-2.12-sidebars.json @@ -726,21 +726,21 @@ }, { "type": "category", - "label": "CIS Scan Guides", + "label": "Compliance Scan Guides", "link": { "type": "doc", - "id": "how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides" + "id": "how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides" }, "items": [ - "how-to-guides/advanced-user-guides/cis-scan-guides/install-rancher-cis-benchmark", - "how-to-guides/advanced-user-guides/cis-scan-guides/uninstall-rancher-cis-benchmark", - "how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan", - "how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan-periodically-on-a-schedule", - "how-to-guides/advanced-user-guides/cis-scan-guides/skip-tests", - "how-to-guides/advanced-user-guides/cis-scan-guides/view-reports", - "how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark", - "how-to-guides/advanced-user-guides/cis-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule", - "how-to-guides/advanced-user-guides/cis-scan-guides/create-a-custom-benchmark-version-to-run" + "how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance", + "how-to-guides/advanced-user-guides/compliance-scan-guides/uninstall-rancher-compliance", + "how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan", + "how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan-periodically-on-a-schedule", + "how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests", + "how-to-guides/advanced-user-guides/compliance-scan-guides/view-reports", + "how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance", + 
"how-to-guides/advanced-user-guides/compliance-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule", + "how-to-guides/advanced-user-guides/compliance-scan-guides/create-a-custom-compliance-version-to-run" ] }, { @@ -1167,16 +1167,16 @@ }, { "type": "category", - "label": "CIS Scans", + "label": "Compliance Scans", "link": { "type": "doc", - "id": "integrations-in-rancher/cis-scans/cis-scans" + "id": "integrations-in-rancher/compliance-scans/compliance-scans" }, "items": [ - "integrations-in-rancher/cis-scans/configuration-reference", - "integrations-in-rancher/cis-scans/rbac-for-cis-scans", - "integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests", - "integrations-in-rancher/cis-scans/custom-benchmark" + "integrations-in-rancher/compliance-scans/configuration-reference", + "integrations-in-rancher/compliance-scans/rbac-for-compliance-scans", + "integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests", + "integrations-in-rancher/compliance-scans/custom-benchmark" ] }, { From 3ef8fbc69051444fce9611e65eccc23a6cf9fb62 Mon Sep 17 00:00:00 2001 From: Krunal Hingu Date: Fri, 18 Jul 2025 11:00:33 +0530 Subject: [PATCH 25/57] refactor: update documentation & improvements --- .../compliance-scan-guides/skip-tests.md | 6 +- .../compliance-scans/compliance-scans.md | 2 +- .../configuration-reference.md | 4 +- .../compliance-scans/custom-benchmark.md | 9 +-- .../skipped-and-not-applicable-tests.md | 57 ------------------- .../monitoring-best-practices.md | 2 +- .../rke2-cluster-configuration.md | 2 +- .../reference-guides/rancher-cluster-tools.md | 6 +- .../rancher-security/rancher-security.md | 12 +--- 9 files changed, 18 insertions(+), 82 deletions(-) delete mode 100644 docs/integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests.md diff --git a/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md index 
c28eb027a4b..dee5bff565a 100644 --- a/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md +++ b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md @@ -6,12 +6,12 @@ title: Skip Tests -Compliancescans can be run using test profiles with user-defined skips. +Compliance scans can be run using test profiles with user-defined skips. -To skip tests, you will create a custom Compliancescan profile. A profile contains the configuration for the Compliancescan, which includes the benchmark versions to use and any specific tests to skip in that benchmark. +To skip tests, you will create a custom Compliance scan profile. A profile contains the configuration for the Compliance scan, which includes the benchmark versions to use and any specific tests to skip in that benchmark. 1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to run a Compliancescan and click **Explore**. +1. On the **Clusters** page, go to the cluster where you want to run a Compliance scan and click **Explore**. 1. Click **Compliance > Profile**. 1. From here, you can create a profile in multiple ways. To make a new profile, click **Create** and fill out the form in the UI. To make a new profile based on an existing profile, go to the existing profile and click **⋮ Clone**. If you are filling out the form, add the tests to skip using the test IDs, using the relevant Compliance Benchmark as a reference. If you are creating the new test profile as YAML, you will add the IDs of the tests to skip in the `skipTests` directive. 
You will also give the profile a name: diff --git a/docs/integrations-in-rancher/compliance-scans/compliance-scans.md b/docs/integrations-in-rancher/compliance-scans/compliance-scans.md index 06663aaf770..757fdc9d24b 100644 --- a/docs/integrations-in-rancher/compliance-scans/compliance-scans.md +++ b/docs/integrations-in-rancher/compliance-scans/compliance-scans.md @@ -119,4 +119,4 @@ For more information about configuring the custom resources for the scans, profi ## How-to Guides -Please refer to the [CIS Scan Guides](../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) to learn how to run CIS scans. \ No newline at end of file +Please refer to the [Compliance Scan Guides](../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) to learn how to run Compliance scans. diff --git a/docs/integrations-in-rancher/compliance-scans/configuration-reference.md b/docs/integrations-in-rancher/compliance-scans/configuration-reference.md index 9ad9c40b18e..4406a091acf 100644 --- a/docs/integrations-in-rancher/compliance-scans/configuration-reference.md +++ b/docs/integrations-in-rancher/compliance-scans/configuration-reference.md @@ -72,7 +72,7 @@ A benchmark version is the name of benchmark to run using `kube-bench`, as well A `ClusterScanBenchmark` defines the Compliance `BenchmarkVersion` name and test configurations. The `BenchmarkVersion` name is a parameter provided to the `kube-bench` tool. -By default, a few `BenchmarkVersion` names and test configurations are packaged as part of the CIS scan application. When this feature is enabled, these default BenchmarkVersions will be automatically installed and available for users to create a ClusterScanProfile. +By default, a few `BenchmarkVersion` names and test configurations are packaged as part of the Compliance scan application. 
When this feature is enabled, these default BenchmarkVersions will be automatically installed and available for users to create a ClusterScanProfile. :::caution @@ -106,4 +106,4 @@ metadata: spec: clusterProvider: "" minKubernetesVersion: 1.15.0 -``` \ No newline at end of file +``` diff --git a/docs/integrations-in-rancher/compliance-scans/custom-benchmark.md b/docs/integrations-in-rancher/compliance-scans/custom-benchmark.md index ecf83196136..fc4bf0e3b4b 100644 --- a/docs/integrations-in-rancher/compliance-scans/custom-benchmark.md +++ b/docs/integrations-in-rancher/compliance-scans/custom-benchmark.md @@ -9,13 +9,14 @@ title: Creating a Custom Benchmark Version for Running a Cluster Scan Each Benchmark Version defines a set of test configuration files that define the Compliance tests to be run by the kube-bench tool. The `rancher-compliance` application installs a few default Benchmark Versions, which are listed under the Compliance application menu. -But there could be some Kubernetes cluster setups that require custom configurations of the Benchmark tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream CIS Benchmarks look for them. -It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-compliance` application. +But in the following cases, a custom configuration or remediation may be required: -When a cluster scan is run, you need to select a Profile which points to a specific Benchmark Version. +- Non-standard file locations: When Kubernetes binaries, configuration or certificate paths deviate from upstream benchmark defaults. +Example: Unlike traditional Kubernetes, K3s bundles control plane components into a single binary. Therefore, the `--anonymous-auth` flag's presence and configuration should be verified in K3s' logs (`journalctl`), not via `kube-apiserver` process checks (`ps`). 
-Follow all the steps below to add a custom Benchmark Version and run a scan using it. +- Alternative risk mitigations: If a setup doesn't meet a check but has an equally effective, justified compensating control, or is simply not affected by the check's requirement because of its design. +Example: By default, K3s embeds the API server within the k3s process. There is no API server pod specification file, so verifying that file's permissions is not required. ## 1. Prepare the Custom Benchmark Version ConfigMap diff --git a/docs/integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests.md b/docs/integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests.md deleted file mode 100644 index 965d015d9c8..00000000000 --- a/docs/integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -title: Skipped and Not Applicable Tests --- - - - - - -This section lists the tests that are skipped in the permissive test profile for RKE. - -> All the tests that are skipped and not applicable on this page will be counted as Not Applicable in the v2.5 generated report. The skipped test count will only mention the user-defined skipped tests. This allows user-skipped tests to be distinguished from the tests that are skipped by default in the RKE permissive test profile. - -## CIS Benchmark v1.5 - -### CIS Benchmark v1.5 Skipped Tests - -| Number | Description | Reason for Skipping | -| ---------- | ------------- | --------- | -| 1.1.12 | Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) | A system service account is required for etcd data directory ownership. Refer to Rancher's hardening guide for more details on how to configure this ownership. 
| -| 1.2.6 | Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. | -| 1.2.16 | Ensure that the admission control plugin PodSecurityPolicy is set (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | -| 1.2.33 | Ensure that the --encryption-provider-config argument is set as appropriate (Manual) | Enabling encryption changes how data can be recovered as data is encrypted. | -| 1.2.34 | Ensure that encryption providers are appropriately configured (Manual) | Enabling encryption changes how data can be recovered as data is encrypted. | -| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Automated) | System level configurations are required before provisioning the cluster in order for this argument to be set to true. | -| 4.2.10 | Ensure that the--tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. | -| 5.1.5 | Ensure that default service accounts are not actively used. (Automated) | Kubernetes provides default service accounts to be used. | -| 5.2.2 | Minimize the admission of containers wishing to share the host process ID namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | -| 5.2.3 | Minimize the admission of containers wishing to share the host IPC namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | -| 5.2.4 | Minimize the admission of containers wishing to share the host network namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. 
| -| 5.2.5 | Minimize the admission of containers with allowPrivilegeEscalation (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | -| 5.3.2 | Ensure that all Namespaces have Network Policies defined (Automated) | Enabling Network Policies can prevent certain applications from communicating with each other. | -| 5.6.4 | The default namespace should not be used (Automated) | Kubernetes provides a default namespace. | - -### CIS Benchmark v1.5 Not Applicable Tests - -| Number | Description | Reason for being not applicable | -| ---------- | ------------- | --------- | -| 1.1.1 | Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. | -| 1.1.2 | Ensure that the API server pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. | -| 1.1.3 | Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. | -| 1.1.4 | Ensure that the controller manager pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. | -| 1.1.5 | Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for scheduler. 
All configuration is passed in as arguments at container run time. | -| 1.1.6 | Ensure that the scheduler pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. | -| 1.1.7 | Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. | -| 1.1.8 | Ensure that the etcd pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. | -| 1.1.13 | Ensure that the admin.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. | -| 1.1.14 | Ensure that the admin.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. | -| 1.1.15 | Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. | -| 1.1.16 | Ensure that the scheduler.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. 
| -| 1.1.17 | Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. | -| 1.1.18 | Ensure that the controller-manager.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. | -| 1.3.6 | Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) | Clusters provisioned by RKE handles certificate rotation directly through RKE. | -| 4.1.1 | Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. | -| 4.1.2 | Ensure that the kubelet service file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. | -| 4.1.9 | Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn’t require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. | -| 4.1.10 | Ensure that the kubelet configuration file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn’t require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. | -| 4.2.12 | Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) | Clusters provisioned by RKE handles certificate rotation directly through RKE. 
| \ No newline at end of file diff --git a/docs/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md b/docs/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md index afa372b23a4..ffa2096211d 100644 --- a/docs/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md +++ b/docs/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md @@ -98,7 +98,7 @@ Monitoring the availability and performance of all your internal workloads is vi ## Security Monitoring -In addition to monitoring workloads to detect performance, availability or scalability problems, the cluster and the workloads running into it should also be monitored for potential security problems. A good starting point is to frequently run and alert on [CIS Scans](../../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) which check if the cluster is configured according to security best practices. +In addition to monitoring workloads to detect performance, availability or scalability problems, the cluster and the workloads running in it should also be monitored for potential security problems. A good starting point is to frequently run and alert on [Compliance Scans](../../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) which check if the cluster is configured according to security best practices. For the workloads, you can have a look at Kubernetes and Container security solutions like [NeuVector](https://www.suse.com/products/neuvector/), [Falco](https://falco.org/), [Aqua Kubernetes Security](https://www.aquasec.com/solutions/kubernetes-container-security/), [SysDig](https://sysdig.com/). 
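The "frequently run and alert" advice in the Security Monitoring section above maps naturally to a scheduled `ClusterScan`. The sketch below is hedged: `scanProfileName` and `scheduledScanConfig` are field names assumed from the upstream cis-operator CRD and may differ in a given release, and the cron expression is illustrative.

```yaml
# Hedged sketch of a recurring scan that alerting can be built on.
apiVersion: compliance.cattle.io/v1
kind: ClusterScan
metadata:
  name: nightly-compliance-scan      # illustrative name
spec:
  scanProfileName: foo-profile       # an existing ClusterScanProfile
  scheduledScanConfig:               # assumed field, per upstream cis-operator
    cronSchedule: "0 2 * * *"        # illustrative: nightly at 02:00
    retentionCount: 3                # keep the last three reports
```

Each run should then produce a `ClusterScanReport` that monitoring or alerting tooling can inspect.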
diff --git a/docs/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md b/docs/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md index 38eb81d2530..44d7d22123b 100644 --- a/docs/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md +++ b/docs/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md @@ -133,7 +133,7 @@ If the cloud provider you want to use is not listed as an option, you will need The default [pod security admission configuration template](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) for the cluster. -##### Worker compliance Profile +##### Worker Compliance Profile Select a [compliance benchmark](../../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) to validate the system configuration against. diff --git a/docs/reference-guides/rancher-cluster-tools.md b/docs/reference-guides/rancher-cluster-tools.md index 739abf9af26..b89ff4a1299 100644 --- a/docs/reference-guides/rancher-cluster-tools.md +++ b/docs/reference-guides/rancher-cluster-tools.md @@ -42,8 +42,8 @@ Rancher's integration with Istio was improved in Rancher v2.5. For more information, refer to the Istio documentation [here.](../integrations-in-rancher/istio/istio.md) -## CIS Scans +## Compliance Scans -Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark. +Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in most recognized Kubernetes Security Benchmarks, such as STIG. 
-For more information, refer to the Compliance scan documentation [here.](../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) \ No newline at end of file +For more information, refer to the Compliance scan documentation [here.](../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) diff --git a/docs/reference-guides/rancher-security/rancher-security.md b/docs/reference-guides/rancher-security/rancher-security.md index 21c67f25507..a39be982169 100644 --- a/docs/reference-guides/rancher-security/rancher-security.md +++ b/docs/reference-guides/rancher-security/rancher-security.md @@ -33,17 +33,9 @@ NeuVector is an open-source, container-focused security application that is now ## Running a Compliance Security Scan on a Kubernetes Cluster -Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the [CIS](https://www.cisecurity.org/cis-benchmarks/) (Center for Internet Security) Kubernetes Benchmark. +Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices. -The CIS Kubernetes Benchmark is a reference document that can be used to establish a secure configuration baseline for Kubernetes. - -The Center for Internet Security (CIS) is a 501(c\)(3) non-profit organization, formed in October 2000, with a mission to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace". - -CIS Benchmarks are best practices for the secure configuration of a target system. CIS Benchmarks are developed through the generous volunteer efforts of subject matter experts, technology vendors, public and private community members, and the CIS Benchmark Development team. 
- -The Benchmark provides recommendations of two types: Automated and Manual. We run tests related to only Automated recommendations. -When Rancher runs a CIS security scan on a cluster, it generates a report showing the results of each test, including a summary with the number of passed, skipped and failed tests. The report also includes remediation steps for any failed tests. +When Rancher runs a Compliance scan on a cluster, it generates a report showing the results of each test, including a summary with the number of passed, skipped and failed tests. The report also includes remediation steps for any failed tests. For details, refer to the section on [security scans](../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md). From e4e911a1b4d16677c10096da05ddfea3bf37bc21 Mon Sep 17 00:00:00 2001 From: Krunal Hingu Date: Fri, 18 Jul 2025 11:11:44 +0530 Subject: [PATCH 26/57] refactor: update documentation & improvements for 2.12 docs --- .../compliance-scan-guides/skip-tests.md | 6 +- .../compliance-scans/compliance-scans.md | 2 +- .../configuration-reference.md | 4 +- .../compliance-scans/custom-benchmark.md | 9 +-- .../skipped-and-not-applicable-tests.md | 57 ------------------- .../monitoring-best-practices.md | 2 +- .../rke2-cluster-configuration.md | 4 +- .../monitoring-v2-configuration/receivers.md | 12 ++-- .../reference-guides/rancher-cluster-tools.md | 6 +- .../rancher-security/rancher-security.md | 18 ++---- 10 files changed, 28 insertions(+), 92 deletions(-) delete mode 100644 versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests.md diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md index c28eb027a4b..dee5bff565a 100644 --- 
a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md +++ b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md @@ -6,12 +6,12 @@ title: Skip Tests -Compliancescans can be run using test profiles with user-defined skips. +Compliance scans can be run using test profiles with user-defined skips. -To skip tests, you will create a custom Compliancescan profile. A profile contains the configuration for the Compliancescan, which includes the benchmark versions to use and any specific tests to skip in that benchmark. +To skip tests, you will create a custom Compliance scan profile. A profile contains the configuration for the Compliance scan, which includes the benchmark versions to use and any specific tests to skip in that benchmark. 1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to run a Compliancescan and click **Explore**. +1. On the **Clusters** page, go to the cluster where you want to run a Compliance scan and click **Explore**. 1. Click **Compliance > Profile**. 1. From here, you can create a profile in multiple ways. To make a new profile, click **Create** and fill out the form in the UI. To make a new profile based on an existing profile, go to the existing profile and click **⋮ Clone**. If you are filling out the form, add the tests to skip using the test IDs, using the relevant Compliance Benchmark as a reference. If you are creating the new test profile as YAML, you will add the IDs of the tests to skip in the `skipTests` directive. 
You will also give the profile a name: diff --git a/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/compliance-scans.md b/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/compliance-scans.md index 06663aaf770..757fdc9d24b 100644 --- a/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/compliance-scans.md +++ b/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/compliance-scans.md @@ -119,4 +119,4 @@ For more information about configuring the custom resources for the scans, profi ## How-to Guides -Please refer to the [CIS Scan Guides](../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) to learn how to run CIS scans. \ No newline at end of file +Please refer to the [Compliance Scan Guides](../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) to learn how to run Compliance scans. diff --git a/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/configuration-reference.md b/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/configuration-reference.md index 9ad9c40b18e..4406a091acf 100644 --- a/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/configuration-reference.md +++ b/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/configuration-reference.md @@ -72,7 +72,7 @@ A benchmark version is the name of benchmark to run using `kube-bench`, as well A `ClusterScanBenchmark` defines the Compliance `BenchmarkVersion` name and test configurations. The `BenchmarkVersion` name is a parameter provided to the `kube-bench` tool. -By default, a few `BenchmarkVersion` names and test configurations are packaged as part of the CIS scan application. When this feature is enabled, these default BenchmarkVersions will be automatically installed and available for users to create a ClusterScanProfile. 
+By default, a few `BenchmarkVersion` names and test configurations are packaged as part of the Compliance scan application. When this feature is enabled, these default BenchmarkVersions will be automatically installed and available for users to create a ClusterScanProfile. :::caution @@ -106,4 +106,4 @@ metadata: spec: clusterProvider: "" minKubernetesVersion: 1.15.0 -``` \ No newline at end of file +``` diff --git a/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/custom-benchmark.md b/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/custom-benchmark.md index ecf83196136..fc4bf0e3b4b 100644 --- a/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/custom-benchmark.md +++ b/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/custom-benchmark.md @@ -9,13 +9,14 @@ title: Creating a Custom Benchmark Version for Running a Cluster Scan Each Benchmark Version defines a set of test configuration files that define the Compliance tests to be run by the kube-bench tool. The `rancher-compliance` application installs a few default Benchmark Versions which are listed under Compliance application menu. -But there could be some Kubernetes cluster setups that require custom configurations of the Benchmark tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream CIS Benchmarks look for them. -It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-compliance` application. +But in the following cases, a custom configuration or remediation may be required: -When a cluster scan is run, you need to select a Profile which points to a specific Benchmark Version. +- Non-standard file locations: When Kubernetes binaries, configuration or certificate paths deviate from upstream benchmark defaults. +Example: Unlike traditional Kubernetes, K3s bundles control plane components into a single binary. 
Therefore, the presence and configuration of the `--anonymous-auth` flag should be verified in K3s' logs (`journalctl`), not via `kube-apiserver` process checks (`ps`). -Follow all the steps below to add a custom Benchmark Version and run a scan using it. +- Alternative risk mitigations: If a setup doesn't meet a check but has an equally effective compensating control with justification, or is simply not subject to the check requirement because of its design. +Example: By default, K3s embeds the API server within the k3s process. There is no API server pod specification file, so verifying the latter's file permissions is not required. ## 1. Prepare the Custom Benchmark Version ConfigMap diff --git a/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests.md b/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests.md deleted file mode 100644 index 965d015d9c8..00000000000 --- a/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -title: Skipped and Not Applicable Tests ---- - - - - - -This section lists the tests that are skipped in the permissive test profile for RKE. - -> All the tests that are skipped and not applicable on this page will be counted as Not Applicable in the v2.5 generated report. The skipped test count will only mention the user-defined skipped tests. This allows user-skipped tests to be distinguished from the tests that are skipped by default in the RKE permissive test profile. - -## CIS Benchmark v1.5 - -### CIS Benchmark v1.5 Skipped Tests - -| Number | Description | Reason for Skipping |
| ---------- | ------------- | --------- | -| 1.1.12 | Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) | A system service account is required for etcd data directory ownership.
Refer to Rancher's hardening guide for more details on how to configure this ownership. | -| 1.2.6 | Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. | -| 1.2.16 | Ensure that the admission control plugin PodSecurityPolicy is set (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | -| 1.2.33 | Ensure that the --encryption-provider-config argument is set as appropriate (Manual) | Enabling encryption changes how data can be recovered as data is encrypted. | -| 1.2.34 | Ensure that encryption providers are appropriately configured (Manual) | Enabling encryption changes how data can be recovered as data is encrypted. | -| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Automated) | System level configurations are required before provisioning the cluster in order for this argument to be set to true. | -| 4.2.10 | Ensure that the--tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. | -| 5.1.5 | Ensure that default service accounts are not actively used. (Automated) | Kubernetes provides default service accounts to be used. | -| 5.2.2 | Minimize the admission of containers wishing to share the host process ID namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | -| 5.2.3 | Minimize the admission of containers wishing to share the host IPC namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. 
| -| 5.2.4 | Minimize the admission of containers wishing to share the host network namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | -| 5.2.5 | Minimize the admission of containers with allowPrivilegeEscalation (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | -| 5.3.2 | Ensure that all Namespaces have Network Policies defined (Automated) | Enabling Network Policies can prevent certain applications from communicating with each other. | -| 5.6.4 | The default namespace should not be used (Automated) | Kubernetes provides a default namespace. | - -### CIS Benchmark v1.5 Not Applicable Tests - -| Number | Description | Reason for being not applicable | -| ---------- | ------------- | --------- | -| 1.1.1 | Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. | -| 1.1.2 | Ensure that the API server pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. | -| 1.1.3 | Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. | -| 1.1.4 | Ensure that the controller manager pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. 
| -| 1.1.5 | Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. | -| 1.1.6 | Ensure that the scheduler pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. | -| 1.1.7 | Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. | -| 1.1.8 | Ensure that the etcd pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. | -| 1.1.13 | Ensure that the admin.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. | -| 1.1.14 | Ensure that the admin.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. | -| 1.1.15 | Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. | -| 1.1.16 | Ensure that the scheduler.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for scheduler. 
All configuration is passed in as arguments at container run time. | -| 1.1.17 | Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. | -| 1.1.18 | Ensure that the controller-manager.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. | -| 1.3.6 | Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) | Clusters provisioned by RKE handles certificate rotation directly through RKE. | -| 4.1.1 | Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. | -| 4.1.2 | Ensure that the kubelet service file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. | -| 4.1.9 | Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn’t require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. | -| 4.1.10 | Ensure that the kubelet configuration file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn’t require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. 
| -| 4.2.12 | Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) | Clusters provisioned by RKE handles certificate rotation directly through RKE. | \ No newline at end of file diff --git a/versioned_docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md b/versioned_docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md index afa372b23a4..ffa2096211d 100644 --- a/versioned_docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md +++ b/versioned_docs/version-2.12/reference-guides/best-practices/rancher-managed-clusters/monitoring-best-practices.md @@ -98,7 +98,7 @@ Monitoring the availability and performance of all your internal workloads is vi ## Security Monitoring -In addition to monitoring workloads to detect performance, availability or scalability problems, the cluster and the workloads running into it should also be monitored for potential security problems. A good starting point is to frequently run and alert on [CIS Scans](../../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) which check if the cluster is configured according to security best practices. +In addition to monitoring workloads to detect performance, availability or scalability problems, the cluster and the workloads running in it should also be monitored for potential security problems. A good starting point is to frequently run and alert on [Compliance Scans](../../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) which check if the cluster is configured according to security best practices.
For the workloads, you can have a look at Kubernetes and Container security solutions like [NeuVector](https://www.suse.com/products/neuvector/), [Falco](https://falco.org/), [Aqua Kubernetes Security](https://www.aquasec.com/solutions/kubernetes-container-security/), [SysDig](https://sysdig.com/). diff --git a/versioned_docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md b/versioned_docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md index a76cb30552d..44d7d22123b 100644 --- a/versioned_docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md +++ b/versioned_docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md @@ -133,9 +133,9 @@ If the cloud provider you want to use is not listed as an option, you will need The default [pod security admission configuration template](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) for the cluster. -##### Worker CIS Profile +##### Worker Compliance Profile -Select a [CIS benchmark](../../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md) to validate the system configuration against. +Select a [compliance benchmark](../../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) to validate the system configuration against. 
##### Project Network Isolation diff --git a/versioned_docs/version-2.12/reference-guides/monitoring-v2-configuration/receivers.md b/versioned_docs/version-2.12/reference-guides/monitoring-v2-configuration/receivers.md index b1237e3646b..16e3940e735 100644 --- a/versioned_docs/version-2.12/reference-guides/monitoring-v2-configuration/receivers.md +++ b/versioned_docs/version-2.12/reference-guides/monitoring-v2-configuration/receivers.md @@ -351,29 +351,29 @@ receivers: - service_key: 'database-integration-key' ``` -## Example Route Config for CIS Scan Alerts +## Example Route Config for Compliance Scan Alerts -While configuring the routes for `rancher-cis-benchmark` alerts, you can specify the matching using the key-value pair `job: rancher-cis-scan`. +While configuring the routes for `rancher-compliance` alerts, you can specify the matching using the key-value pair `job: rancher-compliance-scan`. -For example, the following example route configuration could be used with a Slack receiver named `test-cis`: +For example, the following route configuration could be used with a Slack receiver named `test-compliance`: ```yaml spec: - receiver: test-cis + receiver: test-compliance group_by: # - string group_wait: 30s group_interval: 30s repeat_interval: 30s match: - job: rancher-cis-scan + job: rancher-compliance-scan # key: string match_re: {} # key: string ``` -For more information on enabling alerting for `rancher-cis-benchmark`, see [this section.](../../how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark.md) +For more information on enabling alerting for `rancher-compliance`, see [this section.](../../how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance.md) ## Trusted CA for Notifiers diff --git a/versioned_docs/version-2.12/reference-guides/rancher-cluster-tools.md b/versioned_docs/version-2.12/reference-guides/rancher-cluster-tools.md index
ad46fbdd9d2..b89ff4a1299 100644 --- a/versioned_docs/version-2.12/reference-guides/rancher-cluster-tools.md +++ b/versioned_docs/version-2.12/reference-guides/rancher-cluster-tools.md @@ -42,8 +42,8 @@ Rancher's integration with Istio was improved in Rancher v2.5. For more information, refer to the Istio documentation [here.](../integrations-in-rancher/istio/istio.md) -## CIS Scans +## Compliance Scans -Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark. +Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in widely recognized Kubernetes security benchmarks, such as STIG. -For more information, refer to the CIS scan documentation [here.](../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md) \ No newline at end of file +For more information, refer to the Compliance scan documentation [here.](../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) diff --git a/versioned_docs/version-2.12/reference-guides/rancher-security/rancher-security.md b/versioned_docs/version-2.12/reference-guides/rancher-security/rancher-security.md index f16699b8ac6..a39be982169 100644 --- a/versioned_docs/version-2.12/reference-guides/rancher-security/rancher-security.md +++ b/versioned_docs/version-2.12/reference-guides/rancher-security/rancher-security.md @@ -31,22 +31,14 @@ On this page, we provide security related documentation along with resources to NeuVector is an open-source, container-focused security application that is now integrated into Rancher. NeuVector provides production security, DevOps vulnerability protection, and a container firewall, among other features. Please see the [Rancher docs](../../integrations-in-rancher/neuvector/neuvector.md) and the [NeuVector docs](https://open-docs.neuvector.com/) for more information.
-## Running a CIS Security Scan on a Kubernetes Cluster +## Running a Compliance Security Scan on a Kubernetes Cluster -Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the [CIS](https://www.cisecurity.org/cis-benchmarks/) (Center for Internet Security) Kubernetes Benchmark. +Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices. -The CIS Kubernetes Benchmark is a reference document that can be used to establish a secure configuration baseline for Kubernetes. - -The Center for Internet Security (CIS) is a 501(c\)(3) non-profit organization, formed in October 2000, with a mission to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace". - -CIS Benchmarks are best practices for the secure configuration of a target system. CIS Benchmarks are developed through the generous volunteer efforts of subject matter experts, technology vendors, public and private community members, and the CIS Benchmark Development team. - -The Benchmark provides recommendations of two types: Automated and Manual. We run tests related to only Automated recommendations. - -When Rancher runs a CIS security scan on a cluster, it generates a report showing the results of each test, including a summary with the number of passed, skipped and failed tests. The report also includes remediation steps for any failed tests. - -For details, refer to the section on [security scans](../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md). 
+When Rancher runs a Compliance scan on a cluster, it generates a report showing the results of each test, including a summary with the number of passed, skipped and failed tests. The report also includes remediation steps for any failed tests. +For details, refer to the section on [security scans](../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md). ## SELinux RPM [Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a security enhancement to Linux. After being historically used by government agencies, SELinux is now industry standard and is enabled by default on CentOS 7 and 8. From bddbebcfc57d7cb58dab5f9ff22bd2cafcd2eba0 Mon Sep 17 00:00:00 2001 From: Krunal Hingu Date: Fri, 18 Jul 2025 11:21:14 +0530 Subject: [PATCH 27/57] refactor: remove references to skipped and not applicable tests in compliance scans --- docusaurus.config.js | 4 ---- sidebars.js | 1 - versioned_sidebars/version-2.12-sidebars.json | 1 - 3 files changed, 6 deletions(-) diff --git a/docusaurus.config.js b/docusaurus.config.js index c9776ee1ff1..90582449c3d 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -1635,10 +1635,6 @@ module.exports = { from: "/integrations-in-rancher/cis-scans/rbac-for-cis-scans", to: "/integrations-in-rancher/compliance-scans/rbac-for-compliance-scans", }, - { - from: "/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests", - to: "/integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests", - }, { from: "/integrations-in-rancher/cis-scans/custom-benchmark", to: "/integrations-in-rancher/compliance-scans/custom-benchmark", diff --git a/sidebars.js b/sidebars.js index fb285be5a75..a76ddbb4c8a 100644 --- a/sidebars.js +++ b/sidebars.js @@ -1204,7 +1204,6 @@ const sidebars = { items: [ "integrations-in-rancher/compliance-scans/configuration-reference", "integrations-in-rancher/compliance-scans/rbac-for-compliance-scans",
"integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests", "integrations-in-rancher/compliance-scans/custom-benchmark", ], }, diff --git a/versioned_sidebars/version-2.12-sidebars.json b/versioned_sidebars/version-2.12-sidebars.json index 3bd2e656839..a53714669c0 100644 --- a/versioned_sidebars/version-2.12-sidebars.json +++ b/versioned_sidebars/version-2.12-sidebars.json @@ -1175,7 +1175,6 @@ "items": [ "integrations-in-rancher/compliance-scans/configuration-reference", "integrations-in-rancher/compliance-scans/rbac-for-compliance-scans", - "integrations-in-rancher/compliance-scans/skipped-and-not-applicable-tests", "integrations-in-rancher/compliance-scans/custom-benchmark" ] }, From 6d853a984fb46fe84029c57d896728726ab3745d Mon Sep 17 00:00:00 2001 From: Krunal Hingu Date: Fri, 18 Jul 2025 11:30:10 +0530 Subject: [PATCH 28/57] refactor: update link for Running Security Scans to point to compliance scan guides --- .../version-2.12/shared-files/_cluster-capabilities-table.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/versioned_docs/version-2.12/shared-files/_cluster-capabilities-table.md b/versioned_docs/version-2.12/shared-files/_cluster-capabilities-table.md index e53e3471aad..c4807fa5dba 100644 --- a/versioned_docs/version-2.12/shared-files/_cluster-capabilities-table.md +++ b/versioned_docs/version-2.12/shared-files/_cluster-capabilities-table.md @@ -8,7 +8,7 @@ | [Managing Projects, Namespaces and Workloads](../how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces.md) | ✓ | ✓ | ✓ | ✓ | | [Using App Catalogs](../how-to-guides/new-user-guides/helm-charts-in-rancher/helm-charts-in-rancher.md) | ✓ | ✓ | ✓ | ✓ | | Configuring Tools ([Alerts, Notifiers, Monitoring](../integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md), [Logging](../integrations-in-rancher/logging/logging.md), [Istio](../integrations-in-rancher/istio/istio.md)) | ✓ | ✓ | ✓ | ✓ | -| [Running Security 
Scans](../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md) | ✓ | ✓ | ✓ | ✓ | +| [Running Security Scans](../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) | ✓ | ✓ | ✓ | ✓ | | [Ability to rotate certificates](../how-to-guides/new-user-guides/manage-clusters/rotate-certificates.md) | ✓ | ✓ | | | | Ability to [backup](../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher-launched-kubernetes-clusters.md) and [restore](../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/restore-rancher-launched-kubernetes-clusters-from-backup.md) Rancher-launched clusters | ✓ | ✓ | | ✓4 | | [Cleaning Kubernetes components when clusters are no longer reachable from Rancher](../how-to-guides/new-user-guides/manage-clusters/clean-cluster-nodes.md) | ✓ | | | | From e8d3c04c6a8b9ced7eb737d26578d3b47ef8b318 Mon Sep 17 00:00:00 2001 From: Krunal Hingu Date: Fri, 18 Jul 2025 11:49:02 +0530 Subject: [PATCH 29/57] refactor: remove obsolete CIS scan guide references from configuration --- docusaurus.config.js | 24 ------------------------ 1 file changed, 24 deletions(-) diff --git a/docusaurus.config.js b/docusaurus.config.js index 90582449c3d..21cb81c4cba 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -707,14 +707,6 @@ module.exports = { to: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters", from: "/pages-for-subheaders/checklist-for-production-ready-clusters", }, - { - to: "/how-to-guides/advanced-user-guides/cis-scan-guides", - from: "/pages-for-subheaders/cis-scan-guides", - }, - { - to: "/integrations-in-rancher/cis-scans", - from: "/pages-for-subheaders/cis-scans", - }, { to: "/reference-guides/cli-with-rancher", from: "/pages-for-subheaders/cli-with-rancher", @@ -1427,22 +1419,6 @@ module.exports = { to: "/integrations-in-rancher/cloud-marketplace/supportconfig", from: 
"/explanations/integrations-in-rancher/cloud-marketplace/supportconfig", }, - { - to: "/integrations-in-rancher/cis-scans/configuration-reference", - from: "/explanations/integrations-in-rancher/cis-scans/configuration-reference", - }, - { - to: "/integrations-in-rancher/cis-scans/rbac-for-cis-scans", - from: "/explanations/integrations-in-rancher/cis-scans/rbac-for-cis-scans", - }, - { - to: "/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests", - from: "/explanations/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests", - }, - { - to: "/integrations-in-rancher/cis-scans/custom-benchmark", - from: "/explanations/integrations-in-rancher/cis-scans/custom-benchmark", - }, { to: "/integrations-in-rancher/fleet/architecture", from: "/explanations/integrations-in-rancher/fleet-gitops-at-scale/architecture", From 28511bb76b5c0bf8cbd53a782501bf83eb71a49f Mon Sep 17 00:00:00 2001 From: Krunal Hingu Date: Fri, 18 Jul 2025 13:27:23 +0530 Subject: [PATCH 30/57] refactor: remove 'Skip Tests' section and related references from compliance scan guides --- .../compliance-scan-guides.md | 1 - .../compliance-scan-guides/skip-tests.md | 38 ------------------- docusaurus.config.js | 4 -- sidebars.js | 1 - .../compliance-scan-guides.md | 1 - .../compliance-scan-guides/skip-tests.md | 38 ------------------- versioned_sidebars/version-2.12-sidebars.json | 1 - 7 files changed, 84 deletions(-) delete mode 100644 docs/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md delete mode 100644 versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md diff --git a/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md index 87b8a1fa1db..c90922ec778 100644 --- a/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md +++ 
b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md @@ -10,7 +10,6 @@ title: Compliance Scan Guides - [Uninstall rancher-compliance](uninstall-rancher-compliance.md) - [Run a Scan](run-a-scan.md) - [Run a Scan Periodically on a Schedule](run-a-scan-periodically-on-a-schedule.md) -- [Skip Tests](skip-tests.md) - [View Reports](view-reports.md) - [Enable Alerting for rancher-compliance](enable-alerting-for-rancher-compliance.md) - [Configure Alerts for Periodic Scan on a Schedule](configure-alerts-for-periodic-scan-on-a-schedule.md) diff --git a/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md deleted file mode 100644 index dee5bff565a..00000000000 --- a/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: Skip Tests ---- - - - - - -Compliance scans can be run using test profiles with user-defined skips. - -To skip tests, you will create a custom Compliance scan profile. A profile contains the configuration for the Compliance scan, which includes the benchmark versions to use and any specific tests to skip in that benchmark. - -1. In the upper left corner, click **☰ > Cluster Management**. -1. the **Clusters** page, go to the cluster where you want to run a Compliance scan and click **Explore**. -1. Click **Compliance > Profile**. -1. From here, you can create a profile in multiple ways. To make a new profile, click **Create** and fill out the form in the UI. To make a new profile based on an existing profile, go to the existing profile and click **⋮ Clone**. If you are filling out the form, add the tests to skip using the test IDs, using the relevant Compliance as a reference. If you are creating the new test profile as YAML, you will add the IDs of the tests to skip in the `skipTests` directive. 
You will also give the profile a name: - - ```yaml - apiVersion: compliance.cattle.io/v1 - kind: ClusterScanProfile - metadata: - annotations: - meta.helm.sh/release-name: clusterscan-operator - meta.helm.sh/release-namespace: compliance-operator-system - labels: - app.kubernetes.io/managed-by: Helm - name: "" - spec: - benchmarkVersion: rke2-cis-1.7 - skipTests: - - "1.1.20" - - "1.1.21" - ``` -1. Click **Create**. - -**Result:** A new compliance profile is created. - -When you [run a scan](./run-a-scan.md) that uses this profile, the defined tests will be skipped during the scan. The skipped tests will be marked in the generated report as `Skip`. diff --git a/docusaurus.config.js b/docusaurus.config.js index 21cb81c4cba..134fba0c8c3 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -1587,10 +1587,6 @@ module.exports = { from: "/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan-periodically-on-a-schedule", to: "/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan-periodically-on-a-schedule", }, - { - from: "/how-to-guides/advanced-user-guides/cis-scan-guides/skip-tests", - to: "/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests", - }, { from: "/how-to-guides/advanced-user-guides/cis-scan-guides/view-reports", to: "/how-to-guides/advanced-user-guides/compliance-scan-guides/view-reports", diff --git a/sidebars.js b/sidebars.js index a76ddbb4c8a..9aa046f49b1 100644 --- a/sidebars.js +++ b/sidebars.js @@ -772,7 +772,6 @@ const sidebars = { "how-to-guides/advanced-user-guides/compliance-scan-guides/uninstall-rancher-compliance", "how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan", "how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan-periodically-on-a-schedule", - "how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests", "how-to-guides/advanced-user-guides/compliance-scan-guides/view-reports", 
"how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance", "how-to-guides/advanced-user-guides/compliance-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule", diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md index 87b8a1fa1db..c90922ec778 100644 --- a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md +++ b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md @@ -10,7 +10,6 @@ title: Compliance Scan Guides - [Uninstall rancher-compliance](uninstall-rancher-compliance.md) - [Run a Scan](run-a-scan.md) - [Run a Scan Periodically on a Schedule](run-a-scan-periodically-on-a-schedule.md) -- [Skip Tests](skip-tests.md) - [View Reports](view-reports.md) - [Enable Alerting for rancher-compliance](enable-alerting-for-rancher-compliance.md) - [Configure Alerts for Periodic Scan on a Schedule](configure-alerts-for-periodic-scan-on-a-schedule.md) diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md deleted file mode 100644 index dee5bff565a..00000000000 --- a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: Skip Tests ---- - - - - - -Compliance scans can be run using test profiles with user-defined skips. - -To skip tests, you will create a custom Compliance scan profile. A profile contains the configuration for the Compliance scan, which includes the benchmark versions to use and any specific tests to skip in that benchmark. - -1. 
In the upper left corner, click **☰ > Cluster Management**. -1. the **Clusters** page, go to the cluster where you want to run a Compliance scan and click **Explore**. -1. Click **Compliance > Profile**. -1. From here, you can create a profile in multiple ways. To make a new profile, click **Create** and fill out the form in the UI. To make a new profile based on an existing profile, go to the existing profile and click **⋮ Clone**. If you are filling out the form, add the tests to skip using the test IDs, using the relevant Compliance as a reference. If you are creating the new test profile as YAML, you will add the IDs of the tests to skip in the `skipTests` directive. You will also give the profile a name: - - ```yaml - apiVersion: compliance.cattle.io/v1 - kind: ClusterScanProfile - metadata: - annotations: - meta.helm.sh/release-name: clusterscan-operator - meta.helm.sh/release-namespace: compliance-operator-system - labels: - app.kubernetes.io/managed-by: Helm - name: "" - spec: - benchmarkVersion: rke2-cis-1.7 - skipTests: - - "1.1.20" - - "1.1.21" - ``` -1. Click **Create**. - -**Result:** A new compliance profile is created. - -When you [run a scan](./run-a-scan.md) that uses this profile, the defined tests will be skipped during the scan. The skipped tests will be marked in the generated report as `Skip`. 
diff --git a/versioned_sidebars/version-2.12-sidebars.json b/versioned_sidebars/version-2.12-sidebars.json index a53714669c0..4a3ae873249 100644 --- a/versioned_sidebars/version-2.12-sidebars.json +++ b/versioned_sidebars/version-2.12-sidebars.json @@ -736,7 +736,6 @@ "how-to-guides/advanced-user-guides/compliance-scan-guides/uninstall-rancher-compliance", "how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan", "how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan-periodically-on-a-schedule", - "how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests", "how-to-guides/advanced-user-guides/compliance-scan-guides/view-reports", "how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance", "how-to-guides/advanced-user-guides/compliance-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule", From c347679ebe294cf69de00c5eafd87595fa87d2f4 Mon Sep 17 00:00:00 2001 From: Krunal Hingu Date: Fri, 18 Jul 2025 19:59:25 +0530 Subject: [PATCH 31/57] refactor: remove reference to skipped and not applicable tests in compliance scans documentation --- .../compliance-scans/compliance-scans.md | 2 -- .../compliance-scans/compliance-scans.md | 2 -- 2 files changed, 4 deletions(-) diff --git a/docs/integrations-in-rancher/compliance-scans/compliance-scans.md b/docs/integrations-in-rancher/compliance-scans/compliance-scans.md index 757fdc9d24b..ad12a859b49 100644 --- a/docs/integrations-in-rancher/compliance-scans/compliance-scans.md +++ b/docs/integrations-in-rancher/compliance-scans/compliance-scans.md @@ -103,8 +103,6 @@ The `rancher-compliance` supports the CIS 1.6 Benchmark version. ## About Skipped and Not Applicable Tests -For a list of skipped and not applicable tests, refer to [this page](../../how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md). - For now, only user-defined skipped tests are marked as skipped in the generated report. 
Any skipped tests that are defined as being skipped by one of the default profiles are marked as not applicable. diff --git a/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/compliance-scans.md b/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/compliance-scans.md index 757fdc9d24b..ad12a859b49 100644 --- a/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/compliance-scans.md +++ b/versioned_docs/version-2.12/integrations-in-rancher/compliance-scans/compliance-scans.md @@ -103,8 +103,6 @@ The `rancher-compliance` supports the CIS 1.6 Benchmark version. ## About Skipped and Not Applicable Tests -For a list of skipped and not applicable tests, refer to [this page](../../how-to-guides/advanced-user-guides/compliance-scan-guides/skip-tests.md). - For now, only user-defined skipped tests are marked as skipped in the generated report. Any skipped tests that are defined as being skipped by one of the default profiles are marked as not applicable. 
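The hunks above preserve the key behavioral note: only user-defined skips are reported as `Skip`, while skips built into a default profile surface as not applicable. Those user-defined skips are declared in a `ClusterScanProfile`; a minimal illustrative fragment follows, with the profile name as a placeholder and the benchmark version and test IDs reused from the removed guide's example:

```yaml
apiVersion: compliance.cattle.io/v1
kind: ClusterScanProfile
metadata:
  name: hardened-profile        # placeholder name
spec:
  benchmarkVersion: rke2-cis-1.7
  skipTests:                    # these IDs are reported as `Skip` in scan results
    - "1.1.20"
    - "1.1.21"
```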
From 1957cf01e73a5508d3462e21a403299cbceb6c61 Mon Sep 17 00:00:00 2001 From: swastik959 Date: Thu, 24 Jul 2025 10:39:38 +0530 Subject: [PATCH 32/57] addressed correction comments Signed-off-by: swastik959 --- .../compliance-scan-guides/install-rancher-compliance.md | 6 ------ .../compliance-scans/compliance-scans.md | 4 ++-- .../compliance-scan-guides/install-rancher-compliance.md | 6 ------ .../version-2.12/reference-guides/rancher-cluster-tools.md | 2 +- 4 files changed, 3 insertions(+), 15 deletions(-) diff --git a/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance.md b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance.md index d7a00786ea5..c00eab90648 100644 --- a/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance.md +++ b/docs/how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance.md @@ -13,9 +13,3 @@ title: Install Rancher Compliance 1. Click **Install**. **Result:** The compliance scan application is deployed on the Kubernetes cluster. - -:::note - -If you are running Kubernetes v1.24 or earlier, and have a [Pod Security Policy](../../new-user-guides/authentication-permissions-and-global-configuration/create-pod-security-policies.md) (PSP) hardened cluster, Compliance 4.0.0 and later disable PSPs by default. To install Compliance on a PSP-hardened cluster, set `global.psp.enabled` to `true` in the values before installing the chart. [Pod Security Admission](../../new-user-guides/authentication-permissions-and-global-configuration/pod-security-standards.md) (PSA) hardened clusters aren't affected. 
- -::: diff --git a/docs/integrations-in-rancher/compliance-scans/compliance-scans.md index ad12a859b49..962fceb671b 100644 --- a/docs/integrations-in-rancher/compliance-scans/compliance-scans.md +++ b/docs/integrations-in-rancher/compliance-scans/compliance-scans.md @@ -6,9 +6,9 @@ title: Compliance Scans -Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark. The Compliance scans can run on any Kubernetes cluster, including hosted Kubernetes providers such as EKS, AKS, and GKE. +Rancher can run a security scan to check whether a cluster is deployed according to security best practices as defined in Kubernetes security benchmarks, such as those provided by STIG, BSI, or CIS. Compliance scans can run on any Kubernetes cluster, including hosted Kubernetes providers such as EKS, AKS, and GKE. -The `rancher-compliance` app leverages kube-bench, an open-source tool from Aqua Security, to check clusters for CIS Kubernetes Benchmark compliance. Also, to generate a cluster-wide report, the application utilizes Sonobuoy for report aggregation. +The `rancher-compliance` app leverages kube-bench, an open-source tool from Aqua Security, to check the compliance of clusters against Kubernetes benchmarks. To generate a cluster-wide report, the application uses Sonobuoy for report aggregation.
## About the CIS Benchmark diff --git a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance.md b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance.md index d7a00786ea5..c00eab90648 100644 --- a/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance.md +++ b/versioned_docs/version-2.12/how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance.md @@ -13,9 +13,3 @@ title: Install Rancher Compliance 1. Click **Install**. **Result:** The compliance scan application is deployed on the Kubernetes cluster. - -:::note - -If you are running Kubernetes v1.24 or earlier, and have a [Pod Security Policy](../../new-user-guides/authentication-permissions-and-global-configuration/create-pod-security-policies.md) (PSP) hardened cluster, Compliance 4.0.0 and later disable PSPs by default. To install Compliance on a PSP-hardened cluster, set `global.psp.enabled` to `true` in the values before installing the chart. [Pod Security Admission](../../new-user-guides/authentication-permissions-and-global-configuration/pod-security-standards.md) (PSA) hardened clusters aren't affected. - -::: diff --git a/versioned_docs/version-2.12/reference-guides/rancher-cluster-tools.md b/versioned_docs/version-2.12/reference-guides/rancher-cluster-tools.md index b89ff4a1299..b6874436335 100644 --- a/versioned_docs/version-2.12/reference-guides/rancher-cluster-tools.md +++ b/versioned_docs/version-2.12/reference-guides/rancher-cluster-tools.md @@ -44,6 +44,6 @@ For more information, refer to the Istio documentation [here.](../integrations-i ## Compliance Scans -Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in most recognized Kubernetes Security Benchmarks, such as STIG. 
+Rancher can run a security scan to check whether a cluster is deployed according to security best practices as defined in Kubernetes security benchmarks, such as those provided by STIG, BSI, or CIS. For more information, refer to the Compliance scan documentation [here.](../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) From d1e493fec094a7620a24228295a86ad22bdca8ca Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Thu, 24 Jul 2025 08:53:23 -0700 Subject: [PATCH 33/57] Remove RKE1 references in communicating-with-downstream-user-clusters.md --- ...ommunicating-with-downstream-user-clusters.md | 16 +++------------- ...ommunicating-with-downstream-user-clusters.md | 16 +++------------- ...ommunicating-with-downstream-user-clusters.md | 16 +++------------- ...ommunicating-with-downstream-user-clusters.md | 16 +++------------- 4 files changed, 12 insertions(+), 52 deletions(-) diff --git a/docs/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md b/docs/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md index e3dd9cb475e..18abbf631b6 100644 --- a/docs/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md +++ b/docs/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md @@ -41,7 +41,7 @@ There is one cluster controller and one cluster agent for each downstream cluste - Watches for resource changes in the downstream cluster - Brings the current state of the downstream cluster to the desired state - Configures access control policies to clusters and projects -- Provisions clusters by calling the required Docker machine drivers and Kubernetes engines, such as RKE and GKE +- Provisions clusters by calling the required Docker machine drivers and Kubernetes engines, such as GKE By default, to enable Rancher to communicate with a downstream cluster, the cluster controller
connects to the cluster agent. If the cluster agent is not available, the cluster controller can connect to a [node agent](#3-node-agents) instead. @@ -62,7 +62,7 @@ The `cattle-node-agent` is deployed using a [DaemonSet](https://kubernetes.io/do An authorized cluster endpoint (ACE) allows users to connect to the Kubernetes API server of a downstream cluster without having to route their requests through the Rancher authentication proxy. -> ACE is available on RKE, RKE2, and K3s clusters that are provisioned or registered with Rancher. It's not available on clusters in a hosted Kubernetes provider, such as Amazon's EKS. +> ACE is available on RKE2 and K3s clusters that are provisioned or registered with Rancher. It's not available on clusters in a hosted Kubernetes provider, such as Amazon's EKS. There are two main reasons why a user might need the authorized cluster endpoint: @@ -178,11 +178,7 @@ If you see an error related to "impersonation" in the UI, pay close attention to The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster: -- `rancher-cluster.yml`: The RKE cluster configuration file. - `kube_config_rancher-cluster.yml`: The Kubeconfig file for the cluster, this file contains credentials for full access to the cluster. You can use this file to authenticate with a Rancher-launched Kubernetes cluster if Rancher goes down. -- `rancher-cluster.rkestate`: The Kubernetes cluster state file. This file contains credentials for full access to the cluster. Note: This state file is only created when using RKE v0.2.0 or higher. - -> **Note:** The "rancher-cluster" parts of the two latter file names are dependent on how you name the RKE cluster configuration file. For more information on connecting to a cluster without the Rancher authentication proxy and other configuration options, refer to the [kubeconfig file](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md) documentation. 
@@ -194,13 +190,7 @@ The tools that Rancher uses to provision downstream user clusters depends on the Rancher can dynamically provision nodes in a provider such as Amazon EC2, DigitalOcean, Azure, or vSphere, then install Kubernetes on them. -Rancher provisions this type of cluster using [RKE](https://github.com/rancher/rke) and [docker-machine.](https://github.com/rancher/machine) - -### Rancher Launched Kubernetes for Custom Nodes - -When setting up this type of cluster, Rancher installs Kubernetes on existing nodes, which creates a custom cluster. - -Rancher provisions this type of cluster using [RKE.](https://github.com/rancher/rke) +Rancher provisions this type of cluster using [docker-machine.](https://github.com/rancher/machine) ### Hosted Kubernetes Providers diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md index 9d7570ba151..609bbbf60ff 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md @@ -39,7 +39,7 @@ Rancher 使用 [ServiceAccount](https://kubernetes.io/docs/tasks/configure-pod-c - 检测下游集群中的资源变化 - 将下游集群的当前状态变更到目标状态 - 配置集群和项目的访问控制策略 -- 通过调用所需的 Docker Machine 驱动和 Kubernetes 引擎(例如 RKE 和 GKE)来配置集群 +- 通过调用所需的 Docker Machine 驱动和 Kubernetes 引擎(例如,GKE)来配置集群 默认情况下,Cluster Controller 连接到 Cluster Agent,Rancher 才能与下游集群通信。如果 Cluster Agent 不可用,Cluster Controller 可以连接到 [Node Agent](#3-node-agents)。 @@ -60,7 +60,7 @@ Cluster Agent,也叫做 `cattle-cluster-agent`,是运行在下游集群中 授权集群端点(ACE)可连接到下游集群的 Kubernetes API Server,而不用通过 Rancher 认证代理调度请求。 -> 授权集群端点仅适用于 Rancher 启动的 Kubernetes 集群,即只适用于 Rancher 
[使用 RKE](../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) 来配置的集群。它不适用于导入的集群,也不适用于托管在 Kubernetes 提供商中的集群(例如 Amazon 的 EKS)。 +> 授权集群端点仅适用于 Rancher 启动的 Kubernetes 集群,即 [Rancher 配置的集群](../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) 它不适用于导入的集群,也不适用于托管在 Kubernetes 提供商中的集群(例如 Amazon 的 EKS)。 授权集群端点的主要用途: @@ -81,11 +81,7 @@ Cluster Agent,也叫做 `cattle-cluster-agent`,是运行在下游集群中 维护、排除问题和升级集群需要用到以下文件,请妥善保管这些文件: -- `rancher-cluster.yml`:RKE 集群配置文件。 - `kube_config_rancher-cluster.yml`:集群的 Kubeconfig 文件,包含完全访问集群的凭证。如果 Rancher 出现问题时,你可以使用此文件认证由 Rancher 启动的 Kubernetes 集群。 -- `rancher-cluster.rkestate`:Kubernetes 集群状态文件,文件包含用于完全访问集群的凭证。注意:仅在使用 RKE v0.2.0 或更高版本时,才会创建此该文件。 - -> **注意**:后两个文件名中的 `rancher-cluster` 部分取决于你命名 RKE 集群配置文件的方式。 有关在没有 Rancher 认证代理和其他配置选项的情况下连接到集群的更多信息,请参见 [kubeconfig 文件](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md)。 @@ -97,13 +93,7 @@ Rancher 使用什么工具配置下游集群,取决于集群的类型。 Rancher 可以动态启动云上(如 Amazon EC2、DigitalOcean、Azure 或 vSphere 等)的节点,然后在节点上安装 Kubernetes。 -Rancher 使用 [RKE](https://github.com/rancher/rke) 和 [docker-machine](https://github.com/rancher/machine) 来配置这类型的集群。 - -### Rancher 为自定义节点启动 Kubernetes - -在配置此类集群时,Rancher 会在现有节点上安装 Kubernetes,从而创建自定义集群。 - -Rancher 使用 [RKE](https://github.com/rancher/rke) 来启动此类集群。 +Rancher 使用 [docker-machine](https://github.com/rancher/machine) 来配置这类型的集群。 ### 托管的 Kubernetes 提供商 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md index 9d7570ba151..609bbbf60ff 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md +++ 
b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md @@ -39,7 +39,7 @@ Rancher 使用 [ServiceAccount](https://kubernetes.io/docs/tasks/configure-pod-c - 检测下游集群中的资源变化 - 将下游集群的当前状态变更到目标状态 - 配置集群和项目的访问控制策略 -- 通过调用所需的 Docker Machine 驱动和 Kubernetes 引擎(例如 RKE 和 GKE)来配置集群 +- 通过调用所需的 Docker Machine 驱动和 Kubernetes 引擎(例如,GKE)来配置集群 默认情况下,Cluster Controller 连接到 Cluster Agent,Rancher 才能与下游集群通信。如果 Cluster Agent 不可用,Cluster Controller 可以连接到 [Node Agent](#3-node-agents)。 @@ -60,7 +60,7 @@ Cluster Agent,也叫做 `cattle-cluster-agent`,是运行在下游集群中 授权集群端点(ACE)可连接到下游集群的 Kubernetes API Server,而不用通过 Rancher 认证代理调度请求。 -> 授权集群端点仅适用于 Rancher 启动的 Kubernetes 集群,即只适用于 Rancher [使用 RKE](../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) 来配置的集群。它不适用于导入的集群,也不适用于托管在 Kubernetes 提供商中的集群(例如 Amazon 的 EKS)。 +> 授权集群端点仅适用于 Rancher 启动的 Kubernetes 集群,即 [Rancher 配置的集群](../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) 它不适用于导入的集群,也不适用于托管在 Kubernetes 提供商中的集群(例如 Amazon 的 EKS)。 授权集群端点的主要用途: @@ -81,11 +81,7 @@ Cluster Agent,也叫做 `cattle-cluster-agent`,是运行在下游集群中 维护、排除问题和升级集群需要用到以下文件,请妥善保管这些文件: -- `rancher-cluster.yml`:RKE 集群配置文件。 - `kube_config_rancher-cluster.yml`:集群的 Kubeconfig 文件,包含完全访问集群的凭证。如果 Rancher 出现问题时,你可以使用此文件认证由 Rancher 启动的 Kubernetes 集群。 -- `rancher-cluster.rkestate`:Kubernetes 集群状态文件,文件包含用于完全访问集群的凭证。注意:仅在使用 RKE v0.2.0 或更高版本时,才会创建此该文件。 - -> **注意**:后两个文件名中的 `rancher-cluster` 部分取决于你命名 RKE 集群配置文件的方式。 有关在没有 Rancher 认证代理和其他配置选项的情况下连接到集群的更多信息,请参见 [kubeconfig 文件](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md)。 @@ -97,13 +93,7 @@ Rancher 使用什么工具配置下游集群,取决于集群的类型。 Rancher 可以动态启动云上(如 Amazon EC2、DigitalOcean、Azure 或 vSphere 等)的节点,然后在节点上安装 Kubernetes。 -Rancher 使用 [RKE](https://github.com/rancher/rke) 和 [docker-machine](https://github.com/rancher/machine) 来配置这类型的集群。 - -### Rancher 
为自定义节点启动 Kubernetes - -在配置此类集群时,Rancher 会在现有节点上安装 Kubernetes,从而创建自定义集群。 - -Rancher 使用 [RKE](https://github.com/rancher/rke) 来启动此类集群。 +Rancher 使用 [docker-machine](https://github.com/rancher/machine) 来配置这类型的集群。 ### 托管的 Kubernetes 提供商 diff --git a/versioned_docs/version-2.12/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md b/versioned_docs/version-2.12/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md index e3dd9cb475e..18abbf631b6 100644 --- a/versioned_docs/version-2.12/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md +++ b/versioned_docs/version-2.12/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md @@ -41,7 +41,7 @@ There is one cluster controller and one cluster agent for each downstream cluste - Watches for resource changes in the downstream cluster - Brings the current state of the downstream cluster to the desired state - Configures access control policies to clusters and projects -- Provisions clusters by calling the required Docker machine drivers and Kubernetes engines, such as RKE and GKE +- Provisions clusters by calling the required Docker machine drivers and Kubernetes engines, such as GKE By default, to enable Rancher to communicate with a downstream cluster, the cluster controller connects to the cluster agent. If the cluster agent is not available, the cluster controller can connect to a [node agent](#3-node-agents) instead. @@ -62,7 +62,7 @@ The `cattle-node-agent` is deployed using a [DaemonSet](https://kubernetes.io/do An authorized cluster endpoint (ACE) allows users to connect to the Kubernetes API server of a downstream cluster without having to route their requests through the Rancher authentication proxy. -> ACE is available on RKE, RKE2, and K3s clusters that are provisioned or registered with Rancher. 
It's not available on clusters in a hosted Kubernetes provider, such as Amazon's EKS. +> ACE is available on RKE2 and K3s clusters that are provisioned or registered with Rancher. It's not available on clusters in a hosted Kubernetes provider, such as Amazon's EKS. There are two main reasons why a user might need the authorized cluster endpoint: @@ -178,11 +178,7 @@ If you see an error related to "impersonation" in the UI, pay close attention to The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster: -- `rancher-cluster.yml`: The RKE cluster configuration file. - `kube_config_rancher-cluster.yml`: The Kubeconfig file for the cluster, this file contains credentials for full access to the cluster. You can use this file to authenticate with a Rancher-launched Kubernetes cluster if Rancher goes down. -- `rancher-cluster.rkestate`: The Kubernetes cluster state file. This file contains credentials for full access to the cluster. Note: This state file is only created when using RKE v0.2.0 or higher. - -> **Note:** The "rancher-cluster" parts of the two latter file names are dependent on how you name the RKE cluster configuration file. For more information on connecting to a cluster without the Rancher authentication proxy and other configuration options, refer to the [kubeconfig file](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md) documentation. @@ -194,13 +190,7 @@ The tools that Rancher uses to provision downstream user clusters depends on the Rancher can dynamically provision nodes in a provider such as Amazon EC2, DigitalOcean, Azure, or vSphere, then install Kubernetes on them. 
-Rancher provisions this type of cluster using [RKE](https://github.com/rancher/rke) and [docker-machine.](https://github.com/rancher/machine) - -### Rancher Launched Kubernetes for Custom Nodes - -When setting up this type of cluster, Rancher installs Kubernetes on existing nodes, which creates a custom cluster. - -Rancher provisions this type of cluster using [RKE.](https://github.com/rancher/rke) +Rancher provisions this type of cluster using [docker-machine.](https://github.com/rancher/machine) ### Hosted Kubernetes Providers From 8bc89da17d3e3207cf7ab76cebb6db1a255d8f01 Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Thu, 24 Jul 2025 08:58:30 -0700 Subject: [PATCH 34/57] Remove rke1-hardening-guide pages --- .../rke1-hardening-guide.md | 513 --- ...ide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md | 2865 ----------------- .../rke1-hardening-guide.md | 516 --- ...ide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md | 2864 ---------------- .../rke1-hardening-guide.md | 516 --- ...ide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md | 2864 ---------------- .../rke1-hardening-guide.md | 513 --- ...ide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md | 2865 ----------------- 8 files changed, 13516 deletions(-) delete mode 100644 docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md delete mode 100644 docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md delete mode 100644 i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md delete mode 100644 i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md delete mode 100644 
i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md delete mode 100644 i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md delete mode 100644 versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md delete mode 100644 versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md diff --git a/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md b/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md deleted file mode 100644 index afa5dc0fef1..00000000000 --- a/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md +++ /dev/null @@ -1,513 +0,0 @@ ---- -title: RKE Hardening Guides ---- - - - - - - - -This document provides prescriptive guidance for how to harden an RKE cluster intended for production, before provisioning it with Rancher. It outlines the configurations and controls required for Center for Information Security (CIS) Kubernetes benchmark controls. - -:::note -This hardening guide describes how to secure the nodes in your cluster. We recommended that you follow this guide before you install Kubernetes. 
-::: - -This hardening guide is intended to be used for RKE clusters and is associated with the following versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher: - -| Rancher Version | CIS Benchmark Version | Kubernetes Version | -|-----------------|-----------------------|------------------------------| -| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.23 | -| Rancher v2.7 | Benchmark v1.24 | Kubernetes v1.24 | -| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25 up to v1.26 | - -:::note -- In Benchmark v1.24 and later, check id `4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)` might fail, as `/etc/kubernetes/ssl/kube-ca.pem` is set to 644 by default. -- In Benchmark v1.7, the `--protect-kernel-defaults` (`4.2.6`) parameter isn't required anymore, and was removed by CIS. -::: - -For more details on how to evaluate a hardened RKE cluster against the official CIS benchmark, refer to the RKE self-assessment guides for specific Kubernetes and CIS benchmark versions. - -## Host-level requirements - -### Configure Kernel Runtime Parameters - -The following `sysctl` configuration is recommended for all nodes types in the cluster. Set the following parameters in `/etc/sysctl.d/90-kubelet.conf`: - -```ini -vm.overcommit_memory=1 -vm.panic_on_oom=0 -kernel.panic=10 -kernel.panic_on_oops=1 -``` - -Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings. - -### Configure `etcd` user and group - -A user account and group for the **etcd** service is required to be set up before installing RKE. - -#### Create `etcd` user and group - -To create the **etcd** user and group run the following console commands. -The commands below use `52034` for **uid** and **gid** for example purposes. -Any valid unused **uid** or **gid** could also be used in lieu of `52034`. 
- -```bash -groupadd --gid 52034 etcd -useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd --shell /usr/sbin/nologin -``` - -When deploying RKE through its cluster configuration `config.yml` file, update the `uid` and `gid` of the `etcd` user: - -```yaml -services: - etcd: - gid: 52034 - uid: 52034 -``` - -## Kubernetes runtime requirements - -### Configure `default` Service Account - -#### Set `automountServiceAccountToken` to `false` for `default` service accounts - -Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. -Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. -The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments. - -For each namespace including `default` and `kube-system` on a standard RKE install, the `default` service account must include this value: - -```yaml -automountServiceAccountToken: false -``` - -Save the following configuration to a file called `account_update.yaml`. - -```yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: default -automountServiceAccountToken: false -``` - -Create a bash script file called `account_update.sh`. -Be sure to `chmod +x account_update.sh` so the script has execute permissions. - -```bash -#!/bin/bash -e - -for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do - kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)" -done -``` - -Execute this script to apply the `account_update.yaml` configuration to `default` service account in all namespaces. 
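As a quick check that the patch took effect, the same service-account data can be inspected with `jq` (already listed as a prerequisite of the companion self-assessment guide). This is an illustrative sketch: the inline `sample` JSON and its `demo` namespace are made up for the example, and on a live cluster you would pipe `kubectl get serviceaccounts -A -o json` into the same filter instead.

```shell
# List default service accounts that still auto-mount API tokens.
# The inline sample stands in for `kubectl get serviceaccounts -A -o json`.
sample='{"items":[
  {"metadata":{"name":"default","namespace":"kube-system"},"automountServiceAccountToken":false},
  {"metadata":{"name":"default","namespace":"demo"}}
]}'
echo "$sample" | jq -r '.items[]
  | select(.metadata.name == "default")
  | select(.automountServiceAccountToken != false)
  | "\(.metadata.namespace)/\(.metadata.name) still mounts tokens"'
```

Any namespace printed by this filter is one where the `account_update.yaml` patch has not yet been applied.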
- -### Configure Network Policy - -#### Ensure that all Namespaces have Network Policies defined - -Running different applications on the same Kubernetes cluster creates a risk of one compromised application attacking a neighboring application. Network segmentation is important to ensure that containers can communicate only with those they are supposed to. A network policy is a specification of how selections of pods are allowed to communicate with each other and other network endpoints. - -Network Policies are namespace scoped. When a network policy is introduced to a given namespace, all traffic not allowed by the policy is denied. However, if there are no network policies in a namespace all traffic will be allowed into and out of the pods in that namespace. To enforce network policies, a container network interface (CNI) plugin must be enabled. This guide uses [Canal](https://github.com/projectcalico/canal) to provide the policy enforcement. Additional information about CNI providers can be found [here](https://www.suse.com/c/rancher_blog/comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/). - -Once a CNI provider is enabled on a cluster a default network policy can be applied. For reference purposes a **permissive** example is provided below. If you want to allow all traffic to all pods in a namespace (even if policies are added that cause some pods to be treated as “isolated”), you can create a policy that explicitly allows all traffic in that namespace. Save the following configuration as `default-allow-all.yaml`. Additional [documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/) about network policies can be found on the Kubernetes site. - -:::caution -This network policy is just an example and is not recommended for production use. 
-::: - -```yaml ---- -apiVersion: networking.k8s.io/v1 -kind: NetworkPolicy -metadata: - name: default-allow-all -spec: - podSelector: {} - ingress: - - {} - egress: - - {} - policyTypes: - - Ingress - - Egress -``` - -Create a bash script file called `apply_networkPolicy_to_all_ns.sh`. Be sure to `chmod +x apply_networkPolicy_to_all_ns.sh` so the script has execute permissions. - -```bash -#!/bin/bash -e - -for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do - kubectl apply -f default-allow-all.yaml -n ${namespace} -done -``` - -Execute this script to apply the `default-allow-all.yaml` configuration with the **permissive** `NetworkPolicy` to all namespaces. - -## Known Limitations - -- Rancher **exec shell** and **view logs** for pods are **not** functional in a hardened setup when only a public IP is provided when registering custom nodes. This functionality requires a private IP to be provided when registering the custom nodes. - -## Reference Hardened RKE `cluster.yml` Configuration - -The reference `cluster.yml` is used by the RKE CLI that provides the configuration needed to achieve a hardened installation of RKE. RKE [documentation](https://rancher.com/docs/rke/latest/en/installation/) provides additional details about the configuration items. This reference `cluster.yml` does not include the required `nodes` directive which will vary depending on your environment. Documentation for node configuration in RKE can be found [here](https://rancher.com/docs/rke/latest/en/config-options/nodes/). - -The example `cluster.yml` configuration file contains an Admission Configuration policy in the `services.kube-api.admission_configuration` field. 
This [sample](../../psa-restricted-exemptions.md) policy contains the namespace exemptions necessary for an imported RKE cluster to run properly in Rancher, similar to Rancher's pre-defined [`rancher-restricted`](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) policy. - -If you prefer to use RKE's default `restricted` policy, then leave the `services.kube-api.admission_configuration` field empty and set `services.pod_security_configuration` to `restricted`. See [the RKE docs](https://rke.docs.rancher.com/config-options/services/pod-security-admission) for more information. - - - - -:::note -If you intend to import an RKE cluster into Rancher, please consult the [documentation](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) for how to configure the PSA to exempt Rancher system namespaces. -::: - -```yaml -# If you intend to deploy Kubernetes in an air-gapped environment, -# please consult the documentation on how to configure custom RKE images. -nodes: [] -kubernetes_version: # Define RKE version -services: - etcd: - uid: 52034 - gid: 52034 - kube-api: - secrets_encryption_config: - enabled: true - audit_log: - enabled: true - event_rate_limit: - enabled: true - # Leave `pod_security_configuration` out if you are setting a - # custom policy in `admission_configuration`. Otherwise set - # it to `restricted` to use RKE's pre-defined restricted policy, - # and remove everything inside `admission_configuration` field. 
- # - # pod_security_configuration: restricted - # - admission_configuration: - apiVersion: apiserver.config.k8s.io/v1 - kind: AdmissionConfiguration - plugins: - - name: PodSecurity - configuration: - apiVersion: pod-security.admission.config.k8s.io/v1 - kind: PodSecurityConfiguration - defaults: - enforce: "restricted" - enforce-version: "latest" - audit: "restricted" - audit-version: "latest" - warn: "restricted" - warn-version: "latest" - exemptions: - usernames: [] - runtimeClasses: [] - namespaces: [calico-apiserver, - calico-system, - cattle-alerting, - cattle-csp-adapter-system, - cattle-elemental-system, - cattle-epinio-system, - cattle-externalip-system, - cattle-fleet-local-system, - cattle-fleet-system, - cattle-gatekeeper-system, - cattle-global-data, - cattle-global-nt, - cattle-impersonation-system, - cattle-istio, - cattle-istio-system, - cattle-logging, - cattle-logging-system, - cattle-monitoring-system, - cattle-neuvector-system, - cattle-prometheus, - cattle-provisioning-capi-system, - cattle-resources-system, - cattle-sriov-system, - cattle-system, - cattle-ui-plugin-system, - cattle-windows-gmsa-system, - cert-manager, - cis-operator-system, - fleet-default, - ingress-nginx, - istio-system, - kube-node-lease, - kube-public, - kube-system, - longhorn-system, - rancher-alerting-drivers, - security-scan, - tigera-operator] - kube-controller: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - kubelet: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - generate_serving_certificate: true -addons: | - apiVersion: networking.k8s.io/v1 - kind: NetworkPolicy - metadata: - name: default-allow-all - spec: - podSelector: {} - ingress: - - {} - egress: - - {} - policyTypes: - - Ingress - - Egress - --- - apiVersion: v1 - kind: ServiceAccount - metadata: - name: default - automountServiceAccountToken: false -``` - - - - -```yaml -# If you intend to deploy Kubernetes in an air-gapped environment, -# please consult the 
documentation on how to configure custom RKE images. -nodes: [] -kubernetes_version: # Define RKE version -services: - etcd: - uid: 52034 - gid: 52034 - kube-api: - secrets_encryption_config: - enabled: true - audit_log: - enabled: true - event_rate_limit: - enabled: true - pod_security_policy: true - kube-controller: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - kubelet: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - protect-kernel-defaults: true - generate_serving_certificate: true -addons: | - # Upstream Kubernetes restricted PSP policy - # https://github.com/kubernetes/website/blob/564baf15c102412522e9c8fc6ef2b5ff5b6e766c/content/en/examples/policy/restricted-psp.yaml - apiVersion: policy/v1beta1 - kind: PodSecurityPolicy - metadata: - name: restricted-noroot - spec: - privileged: false - # Required to prevent escalations to root. - allowPrivilegeEscalation: false - requiredDropCapabilities: - - ALL - # Allow core volume types. - volumes: - - 'configMap' - - 'emptyDir' - - 'projected' - - 'secret' - - 'downwardAPI' - # Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use. - - 'csi' - - 'persistentVolumeClaim' - - 'ephemeral' - hostNetwork: false - hostIPC: false - hostPID: false - runAsUser: - # Require the container to run without root privileges. - rule: 'MustRunAsNonRoot' - seLinux: - # This policy assumes the nodes are using AppArmor rather than SELinux. - rule: 'RunAsAny' - supplementalGroups: - rule: 'MustRunAs' - ranges: - # Forbid adding the root group. - - min: 1 - max: 65535 - fsGroup: - rule: 'MustRunAs' - ranges: - # Forbid adding the root group. 
- - min: 1 - max: 65535 - readOnlyRootFilesystem: false - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRole - metadata: - name: psp:restricted-noroot - rules: - - apiGroups: - - extensions - resourceNames: - - restricted-noroot - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: psp:restricted-noroot - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: psp:restricted-noroot - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: networking.k8s.io/v1 - kind: NetworkPolicy - metadata: - name: default-allow-all - spec: - podSelector: {} - ingress: - - {} - egress: - - {} - policyTypes: - - Ingress - - Egress - --- - apiVersion: v1 - kind: ServiceAccount - metadata: - name: default - automountServiceAccountToken: false -``` - - - - -## Reference Hardened RKE Cluster Template Configuration - -The reference RKE cluster template provides the minimum required configuration to achieve a hardened installation of Kubernetes. RKE templates are used to provision Kubernetes and define Rancher settings. Follow the Rancher [documentation](../../../../getting-started/installation-and-upgrade/installation-and-upgrade.md) for additional information about installing RKE and its template details. 
- - - - -```yaml -# -# Cluster Config -# -default_pod_security_admission_configuration_template_name: rancher-restricted -enable_network_policy: true -local_cluster_auth_endpoint: - enabled: true -name: # Define cluster name - -# -# Rancher Config -# -rancher_kubernetes_engine_config: - addon_job_timeout: 45 - authentication: - strategy: x509|webhook - kubernetes_version: # Define RKE version - services: - etcd: - uid: 52034 - gid: 52034 - kube-api: - audit_log: - enabled: true - event_rate_limit: - enabled: true - pod_security_policy: false - secrets_encryption_config: - enabled: true - kube-controller: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - kubelet: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - generate_serving_certificate: true - scheduler: - extra_args: - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -``` - - - - -```yaml -# -# Cluster Config -# -default_pod_security_policy_template_id: restricted-noroot -enable_network_policy: true -local_cluster_auth_endpoint: - enabled: true -name: # Define cluster name - -# -# Rancher 
Config -# -rancher_kubernetes_engine_config: - addon_job_timeout: 45 - authentication: - strategy: x509|webhook - kubernetes_version: # Define RKE version - services: - etcd: - uid: 52034 - gid: 52034 - kube-api: - audit_log: - enabled: true - event_rate_limit: - enabled: true - pod_security_policy: true - secrets_encryption_config: - enabled: true - kube-controller: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - kubelet: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - protect-kernel-defaults: true - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - generate_serving_certificate: true - scheduler: - extra_args: - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -``` - - - - -## Conclusion - -If you have followed this guide, your RKE custom cluster provisioned by Rancher will be configured to pass the CIS Kubernetes Benchmark. You can review our RKE self-assessment guides to understand how we verified each of the benchmarks and how you can do the same on your cluster. 
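Before running the full self-assessment, a quick sanity check is to confirm that a few of the hardened flags from the reference configuration actually appear on the running `kube-apiserver` command line. The sketch below checks a shortened stand-in string; on a control plane node you would instead capture the real command line with `ps -ef | grep kube-apiserver | grep -v grep`.

```shell
# Spot-check selected hardening flags on a kube-apiserver command line.
# `cmdline` here is a shortened stand-in for the real process listing.
cmdline='kube-apiserver --anonymous-auth=false --profiling=false --service-account-lookup=true'
for flag in --anonymous-auth=false --profiling=false --service-account-lookup=true; do
  case " $cmdline " in
    *" $flag "*) echo "ok: $flag" ;;
    *)           echo "missing: $flag" ;;
  esac
done
```

The same loop works for any flag the benchmark audits; a `missing:` line means the cluster configuration did not propagate that setting.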
diff --git a/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md b/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md
deleted file mode 100644
index ac002a20369..00000000000
--- a/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md
+++ /dev/null
@@ -1,2865 +0,0 @@
----
-title: RKE Self-Assessment Guide - CIS Benchmark v1.7 - K8s v1.25/v1.26/v1.27
---
-
-
-
-
-
-
-
-This document is a companion to the [RKE Hardening Guide](rke1-hardening-guide.md), which provides prescriptive guidance on how to harden RKE clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
-
-
-This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
-
-| Rancher Version | CIS Benchmark Version | Kubernetes Version |
-|-----------------|-----------------------|--------------------|
-| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25/v1.26/v1.27 |
-
-This guide walks through the various controls and provides updated example commands to audit compliance in Rancher-created clusters. Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. These checks will return a result of `Not Applicable`.
-
-This document is for Rancher operators, security teams, auditors, and decision makers.
-
-For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.7.
You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/). - -## Testing Methodology - -Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files. - -Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the command line on the hosts of all RKE nodes. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required in the testing and evaluation of test results. - -:::note - -This guide only covers `automated` (previously called `scored`) tests. - -::: - -### Controls - -## 1.1 Control Plane Node Configuration Files -### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the -control plane node. -For example, chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. 
-All configuration is passed in as arguments at container run time.
-
-### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 600 /etc/kubernetes/manifests/etcd.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root /etc/kubernetes/manifests/etcd.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 600 - -**Audit:** - -```bash -ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a -``` - -**Expected Result**: - -```console -'permissions' is present -``` - -### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual) - - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root - -**Audit:** - -```bash -ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G -``` - -**Expected Result**: - -```console -'root:root' is present -``` - -### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated) - - -**Result:** pass - -**Remediation:** -On the etcd server node, get the etcd data directory, passed as an argument --data-dir, -from the command 'ps -ef | grep etcd'. -Run the below command (based on the etcd data directory found above). For example, -chmod 700 /var/lib/etcd - -**Audit:** - -```bash -stat -c %a /node/var/lib/etcd -``` - -**Expected Result**: - -```console -'700' is equal to '700' -``` - -**Returned Value**: - -```console -700 -``` - -### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) - - -**Result:** pass - -**Remediation:** -On the etcd server node, get the etcd data directory, passed as an argument --data-dir, -from the command 'ps -ef | grep etcd'. -Run the below command (based on the etcd data directory found above). 
-For example, chown etcd:etcd /var/lib/etcd - -**Audit:** - -```bash -stat -c %U:%G /node/var/lib/etcd -``` - -**Expected Result**: - -```console -'etcd:etcd' is present -``` - -**Returned Value**: - -```console -etcd:etcd -``` - -### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chmod 600 /etc/kubernetes/admin.conf -Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. - -### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chown root:root /etc/kubernetes/admin.conf -Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. - -### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chmod 600 scheduler -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. - -### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root scheduler -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. 
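Most of the applicable checks in section 1.1 reduce to comparing `stat` output against an expected mode or owner. The pattern can be rehearsed on any Linux machine against a scratch file before auditing real nodes; the path below is a temporary file created for the example, not an RKE path.

```shell
# Rehearse the stat-based permission check used throughout section 1.1.
f=$(mktemp)
chmod 600 "$f"
perms=$(stat -c %a "$f")
if [ "$perms" = "600" ]; then
  echo "permissions ok (600)"
else
  echo "permissions=$perms, expected 600 or more restrictive"
fi
rm -f "$f"
```

Note that `stat -c` is the GNU coreutils syntax used by the audits in this guide; it is available on the Linux distributions RKE supports.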
- -### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chmod 600 controllermanager -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. - -### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root controllermanager -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. - -### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. 
-For example, -chown -R root:root /etc/kubernetes/pki/ - -**Audit Script:** `check_files_owner_in_dir.sh` - -```bash -#!/usr/bin/env bash - -# This script is used to ensure the owner is set to root:root for -# the given directory and all the files in it -# -# inputs: -# $1 = /full/path/to/directory -# -# outputs: -# true/false - -INPUT_DIR=$1 - -if [[ "${INPUT_DIR}" == "" ]]; then - echo "false" - exit -fi - -if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then - echo "false" - exit -fi - -statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*) -while read -r statInfoLine; do - f=$(echo ${statInfoLine} | cut -d' ' -f1) - p=$(echo ${statInfoLine} | cut -d' ' -f2) - - if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then - if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then - echo "false" - exit - fi - else - if [[ "$p" != "root:root" ]]; then - echo "false" - exit - fi - fi -done <<< "${statInfoLines}" - - -echo "true" -exit - -``` - -**Audit Execution:** - -```bash -./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Returned Value**: - -```console -true -``` - -### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual) - - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' -exec chmod -R 600 {} + - -**Audit:** - -```bash -find /node/etc/kubernetes/ssl/ -name '*.pem' ! 
-name '*key.pem' | xargs stat -c permissions=%a -``` - -**Expected Result**: - -```console -permissions has permissions 644, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=644 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 -``` - -### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -find /node/etc/kubernetes/ssl/ -name '*key.pem' -exec chmod -R 600 {} + - -**Audit:** - -```bash -find /node/etc/kubernetes/ssl/ -name '*key.pem' | xargs stat -c permissions=%a -``` - -**Expected Result**: - -```console -permissions has permissions 600, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 -``` - -## 1.2 API Server -### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the below parameter. ---anonymous-auth=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--anonymous-auth' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the documentation and configure alternate mechanisms for authentication. Then, -edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and remove the --token-auth-file= parameter. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--token-auth-file' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and remove the `DenyServiceExternalIPs` -from enabled admission plugins. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the -apiserver and kubelets. Then, edit API server pod specification file -/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the -kubelet client certificate and key parameters as below. ---kubelet-client-certificate= ---kubelet-client-key= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Follow the Kubernetes documentation and setup the TLS connection between -the apiserver and kubelets. Then, edit the API server pod specification file -/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the ---kubelet-certificate-authority parameter to the path to the cert file for the certificate authority. ---kubelet-certificate-authority= -When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. - -### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow. -One such example could be as below. 
---authorization-mode=RBAC - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' does not have 'AlwaysAllow' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to a value that includes Node. ---authorization-mode=Node,RBAC - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' has 'Node' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to a value that includes RBAC, -for example `--authorization-mode=Node,RBAC`. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' has 'RBAC' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set the desired limits in a configuration file. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -and set the below parameters. ---enable-admission-plugins=...,EventRateLimit,... ---admission-control-config-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'EventRateLimit' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a -value that does not include AlwaysAdmit. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual) - - -**Result:** warn - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to include -AlwaysPullImages. ---enable-admission-plugins=...,AlwaysPullImages,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'AlwaysPullImages' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual) - - -**Result:** warn - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to include -SecurityContextDeny, unless PodSecurityPolicy is already in place. ---enable-admission-plugins=...,SecurityContextDeny,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create ServiceAccount objects as per your environment.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
-value that does not include ServiceAccount.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --disable-admission-plugins parameter to
-ensure it does not include NamespaceLifecycle.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to a
-value that includes NodeRestriction.
---enable-admission-plugins=...,NodeRestriction,...
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'NodeRestriction'
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.16 Ensure that the --secure-port argument is not set to 0 - NoteThis recommendation is obsolete and will be deleted per the consensus process (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and either remove the --secure-port parameter or
-set it to a different (non-zero) desired port.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--secure-port' is greater than 0 OR '--secure-port' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.17 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.18 Ensure that the --audit-log-path argument is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-path parameter to a suitable path and
-file where you would like audit logs to be written, for example,
---audit-log-path=/var/log/apiserver/audit.log
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-path' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxage parameter to 30
-or as an appropriate number of days, for example,
---audit-log-maxage=30
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxage' is greater or equal to 30
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
-value. For example,
---audit-log-maxbackup=10
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxbackup' is greater or equal to 10
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
-For example, to set it as 100 MB, --audit-log-maxsize=100
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxsize' is greater or equal to 100
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-and, if needed, set the --request-timeout parameter to an appropriate value.
-For example, --request-timeout=300s
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---service-account-lookup=true
-Alternatively, you can delete the --service-account-lookup parameter from this file so
-that the default takes effect.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --service-account-key-file parameter -to the public key file for service accounts. For example, ---service-account-key-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--service-account-key-file' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the etcd certificate and key file parameters. ---etcd-certfile= ---etcd-keyfile= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--etcd-certfile' is present AND '--etcd-keyfile' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection on the apiserver. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the TLS certificate and private key file parameters. ---tls-cert-file= ---tls-private-key-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--tls-cert-file' is present AND '--tls-private-key-file' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection on the apiserver. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the client certificate authority file. ---client-ca-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--client-ca-file' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the etcd certificate authority file parameter. ---etcd-cafile= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--etcd-cafile' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --encryption-provider-config parameter to the path of that file.
-For example, --encryption-provider-config=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--encryption-provider-config' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.30 Ensure that encryption providers are appropriately configured (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-In this file, choose aescbc, kms or secretbox as the encryption provider.
-
-**Audit:**
-
-```bash
-ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%'); if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
-```
-
-**Expected Result**:
-
-```console
-'provider' is present
-```
-
-### 1.2.31 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256, -TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, -TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, -TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, -TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, -TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, -TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA, -TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -## 1.3 Controller Manager -### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold, -for example, --terminated-pod-gc-threshold=10 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--terminated-pod-gc-threshold' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.2 Ensure that the --profiling argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the below parameter. ---profiling=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--profiling' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node to set the below parameter. ---use-service-account-credentials=true - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--use-service-account-credentials' is not equal to 'false' -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --service-account-private-key-file parameter -to the private key file for service accounts. ---service-account-private-key-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--service-account-private-key-file' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --root-ca-file parameter to the certificate bundle file. ---root-ca-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--root-ca-file' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. ---feature-gates=RotateKubeletServerCertificate=true -Clusters provisioned by RKE handle certificate rotation directly through RKE. 
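The Automated checks in this benchmark are evaluated by inspecting the component's process command line (captured by the `ps -ef | grep ...` commands shown in each **Audit**) for the required flag. A minimal sketch of that flag extraction in POSIX shell; the helper name `flag_value` and the sample command line are illustrative assumptions, not part of kube-bench:

```shell
#!/bin/sh
# Illustrative sketch (not kube-bench's actual code): pull a --flag=value
# pair out of a process command line, as the Automated audits above do.
flag_value() {
  # $1: full command line, $2: flag name without the leading dashes
  printf '%s\n' "$1" | tr ' ' '\n' | sed -n "s/^--$2=//p"
}

# Hypothetical command line resembling the kube-controller-manager output above.
cmdline='kube-controller-manager --profiling=false --feature-gates=RotateKubeletServerCertificate=true'

flag_value "$cmdline" profiling       # prints: false
flag_value "$cmdline" feature-gates   # prints: RotateKubeletServerCertificate=true
```

In practice the command line would come from the `ps -ef` audit output rather than a literal string.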
- -### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and ensure the correct value for the --bind-address parameter - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--bind-address' is present OR '--bind-address' is not present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -## 1.4 Scheduler -### 1.4.1 Ensure that the --profiling argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file -on the control 
plane node and set the below parameter. ---profiling=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected Result**: - -```console -'--profiling' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4339 4318 0 Sep11 ? 00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true -``` - -### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml -on the control plane node and ensure the correct value for the --bind-address parameter - -**Audit:** - -```bash -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected Result**: - -```console -'--bind-address' is present OR '--bind-address' is not present -``` - -**Returned Value**: - -```console -root 4339 4318 0 Sep11 ? 
00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true -``` - -## 2 Etcd Node Configuration -### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the etcd service documentation and configure TLS encryption. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml -on the master node and set the below parameters. ---cert-file= ---key-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--cert-file' is present AND '--key-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and set the below parameter. ---client-cert-auth="true" - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--client-cert-auth' is equal to 'true' -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.3 Ensure that the --auto-tls argument is not set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and either remove the --auto-tls parameter or set it to false. 
- --auto-tls=false - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present -``` - -**Returned Value**: - -```console -PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ -``` - -### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the etcd service documentation and configure peer TLS encryption as appropriate -for your etcd cluster. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the -master node and set the below parameters. ---peer-cert-file= ---peer-key-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--peer-cert-file' is present AND '--peer-key-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and set the below parameter. ---peer-client-cert-auth=true - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--peer-client-cert-auth' is equal to 'true' -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and either remove the --peer-auto-tls parameter or set it to false. 
---peer-auto-tls=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/
-```
-
-### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-[Manual test]
-Follow the etcd documentation and create a dedicated certificate authority setup for the
-etcd service.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the
-master node and set the below parameter.
---trusted-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--trusted-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
-
-## 3.1 Authentication and Authorization
-### 3.1.1 Client certificate authentication should not be used for users (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
-implemented in place of client certificates.
-
-### 3.1.2 Service account token authentication should not be used for users (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
-in place of service account tokens.
-
-### 3.1.3 Bootstrap token authentication should not be used for users (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
-in place of bootstrap tokens.
-
-## 3.2 Logging
-### 3.2.1 Ensure that a minimal audit policy is created (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Create an audit policy file for your cluster.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-policy-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the audit policy provided for the cluster and ensure that it covers
-at least the following areas,
-- Access to Secrets managed by the cluster. Care should be taken to only
-  log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
-  order to avoid risk of logging sensitive data.
-- Modification of Pod and Deployment objects.
-- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
-For most requests, minimally logging at the Metadata level is recommended
-(the most basic level of logging).
-
-## 4.1 Worker Node Configuration Files
-### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service.
-All configuration is passed in as arguments at container run time.
-
-### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service.
-All configuration is passed in as arguments at container run time.
-
-### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 600 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=600
-```
-
-### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example, chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 600 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=600
-```
-
-### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Run the following command to modify the file permissions of the
---client-ca-file chmod 600
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 600 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the ownership of the --client-ca-file.
-chown root:root
-
-**Audit:**
-
-```bash
-stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chmod 600 /var/lib/kubelet/config.yaml
-Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet.
-All configuration is passed in as arguments at container run time.
-
-### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chown root:root /var/lib/kubelet/config.yaml
-Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet.
-All configuration is passed in as arguments at container run time.
-
-## 4.2 Kubelet
-### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
-`false`.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
-`--anonymous-auth=false`
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--anonymous-auth' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
-using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---authorization-mode=Webhook
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
-the location of the client CA file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---client-ca-file=
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---read-only-port=0
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--read-only-port' is equal to '0' OR '--read-only-port' is not present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
-value other than 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---streaming-connection-idle-timeout=5m
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-remove the --make-iptables-util-chains argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.7 Ensure that the --hostname-override argument is not set (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and remove the --hostname-override argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-Not Applicable - Clusters provisioned by RKE set the --hostname-override to avoid any hostname configuration errors.
-
-### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--event-qps' is greater or equal to 0 OR '--event-qps' is not present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
-of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
-to the location of the corresponding private key file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
---tls-cert-file= ---tls-private-key-file= -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--tls-cert-file' is present AND '--tls-private-key-file' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.10 Ensure that the --rotate-certificates argument is 
not set to false (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to add the line `rotateCertificates` to `true` or -remove it altogether to use the default value. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS -variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--rotate-certificates' is present OR '--rotate-certificates' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true 
--cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -on each worker node and set the below parameter in the KUBELET_CERTIFICATE_ARGS variable. ---feature-gates=RotateKubeletServerCertificate=true -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service -Not Applicable - Clusters provisioned by RKE handle certificate rotation directly through RKE. - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `TLSCipherSuites` to -TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -or to a subset of these values. -If using executable arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the --tls-cipher-suites parameter as follows, or to a subset of these values. 
---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.13 Ensure that a limit is set on pod PIDs (Manual) - - -**Result:** warn - -**Remediation:** -Decide on an appropriate level for this parameter and set it, -either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting. - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--pod-max-pids' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -## 5.1 RBAC and Service Accounts -### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual) - - -**Result:** warn - -**Remediation:** -Identify all clusterrolebindings to the cluster-admin role. Check if they are used and -if they need this role or if they could use a role with fewer privileges. 
-Where possible, first bind users to a lower privileged role and then remove the -clusterrolebinding to the cluster-admin role : -kubectl delete clusterrolebinding [name] - -### 5.1.2 Minimize access to secrets (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove get, list and watch access to Secret objects in the cluster. - -### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual) - - -**Result:** warn - -**Remediation:** -Where possible replace any use of wildcards in clusterroles and roles with specific -objects or actions. - -### 5.1.4 Minimize access to create pods (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove create access to pod objects in the cluster. - -### 5.1.5 Ensure that default service accounts are not actively used. (Manual) - - -**Result:** pass - -**Remediation:** -Create explicit service accounts wherever a Kubernetes workload requires specific access -to the Kubernetes API server. -Modify the configuration of each default service account to include this value -automountServiceAccountToken: false - -**Audit Script:** `check_for_default_sa.sh` - -```bash -#!/bin/bash - -set -eE - -handle_error() { - echo "false" -} - -trap 'handle_error' ERR - -count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l) -if [[ ${count_sa} -gt 0 ]]; then - echo "false" - exit -fi - -for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name") -do - for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"') - do - read kind name <<<$(IFS=","; echo $result) - 
resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l) - if [[ ${resource_count} -gt 0 ]]; then - echo "false" - exit - fi - done -done - - -echo "true" - -``` - -**Audit Execution:** - -```bash -./check_for_default_sa.sh -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Returned Value**: - -```console -true -``` - -### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) - - -**Result:** warn - -**Remediation:** -Modify the definition of pods and service accounts which do not need to mount service -account tokens to disable it. - -### 5.1.7 Avoid use of system:masters group (Manual) - - -**Result:** warn - -**Remediation:** -Remove the system:masters group from all users in the cluster. - -### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove the impersonate, bind and escalate rights from subjects. - -### 5.1.9 Minimize access to create persistent volumes (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove create access to PersistentVolume objects in the cluster. - -### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the proxy sub-resource of node objects. - -### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the approval sub-resource of certificatesigningrequest objects. 
- -### 5.1.12 Minimize access to webhook configuration objects (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects - -### 5.1.13 Minimize access to the service account token creation (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the token sub-resource of serviceaccount objects. - -## 5.2 Pod Security Standards -### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual) - - -**Result:** warn - -**Remediation:** -Ensure that either Pod Security Admission or an external policy control system is in place -for every namespace which contains user workloads. - -### 5.2.2 Minimize the admission of privileged containers (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of privileged containers. - -### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostPID` containers. - -### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostIPC` containers. - -### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostNetwork` containers. 
- -### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers with `.spec.allowPrivilegeEscalation` set to `true`. - -### 5.2.7 Minimize the admission of root containers (Manual) - - -**Result:** warn - -**Remediation:** -Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot` -or `MustRunAs` with the range of UIDs not including 0, is set. - -### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers with the `NET_RAW` capability. - -### 5.2.9 Minimize the admission of containers with added capabilities (Manual) - - -**Result:** warn - -**Remediation:** -Ensure that `allowedCapabilities` is not present in policies for the cluster unless -it is set to an empty array. - -### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual) - - -**Result:** warn - -**Remediation:** -Review the use of capabilities in applications running on your cluster. Where a namespace -contains applications which do not require any Linux capabilities to operate, consider adding -a PSP which forbids the admission of containers which do not drop all capabilities. - -### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`. 
- -### 5.2.12 Minimize the admission of HostPath volumes (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers with `hostPath` volumes. - -### 5.2.13 Minimize the admission of containers which use HostPorts (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers which use `hostPort` sections. - -## 5.3 Network Policies and CNI -### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual) - - -**Result:** warn - -**Remediation:** -If the CNI plugin in use does not support network policies, consideration should be given to -making use of a different plugin, or finding an alternate mechanism for restricting traffic -in the Kubernetes cluster. - -### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual) - - -**Result:** warn - -**Remediation:** -Follow the documentation and create NetworkPolicy objects as you need them. - -## 5.4 Secrets Management -### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual) - - -**Result:** warn - -**Remediation:** -If possible, rewrite application code to read Secrets from mounted secret files, rather than -from environment variables. - -### 5.4.2 Consider external secret storage (Manual) - - -**Result:** warn - -**Remediation:** -Refer to the Secrets management options offered by your cloud provider or a third-party -secrets management solution. - -## 5.5 Extensible Admission Control -### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual) - - -**Result:** warn - -**Remediation:** -Follow the Kubernetes documentation and setup image provenance. 
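For control 5.3.2 above, a common starting point (assuming the cluster's CNI actually enforces NetworkPolicies, per 5.3.1) is a per-namespace default-deny policy, with narrower allow rules layered on top for the traffic each workload actually needs. A minimal sketch:

```yaml
# Illustrative default-deny policy; apply one per namespace, then add
# workload-specific allow policies on top of it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}   # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
```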
- -## 5.7 General Policies -### 5.7.1 Create administrative boundaries between resources using namespaces (Manual) - - -**Result:** warn - -**Remediation:** -Follow the documentation and create namespaces for objects in your deployment as you need -them. - -### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual) - - -**Result:** warn - -**Remediation:** -Use `securityContext` to enable the docker/default seccomp profile in your pod definitions. -An example is as below: - securityContext: - seccompProfile: - type: RuntimeDefault - -### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual) - - -**Result:** warn - -**Remediation:** -Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a -suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker -Containers. - -### 5.7.4 The default namespace should not be used (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Ensure that namespaces are created to allow for appropriate segregation of Kubernetes -resources and that all new resources are created in a specific namespace. 
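Pulling controls 5.7.2 and 5.7.3 together, a pod spec that satisfies both might look like the following sketch. The name, namespace, and image are placeholders, not part of the benchmark:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example      # placeholder name
  namespace: user-workloads   # placeholder; avoids the default namespace (5.7.4)
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault    # the docker/default seccomp profile (5.7.2)
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
```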
- diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md deleted file mode 100644 index eaffdb72d92..00000000000 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md +++ /dev/null @@ -1,516 +0,0 @@ ---- -title: RKE 加固指南 ---- - - - - - - - -本文档提供了针对生产环境的 RKE 集群进行加固的具体指导,以便在使用 Rancher 部署之前进行配置。它概述了满足信息安全中心(Center for Information Security, CIS)Kubernetes benchmark controls 所需的配置和控制。 - -:::note -这份加固指南描述了如何确保你集群中的节点安全。我们建议你在安装 Kubernetes 之前遵循本指南。 -::: - -此加固指南适用于 RKE 集群,并与以下版本的 CIS Kubernetes Benchmark、Kubernetes 和 Rancher 相关联: - -| Rancher 版本 | CIS Benchmark 版本 | Kubernetes 版本 | -|-----------------|-----------------------|------------------------------| -| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.23 | -| Rancher v2.7 | Benchmark v1.24 | Kubernetes v1.24 | -| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25 至 v1.26 | - -:::note -- 在 Benchmark v1.24 及更高版本中,检查 id `4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)` 可能会失败,因为 `/etc/kubernetes/ssl/kube-ca.pem` 默认设置为 644。 -- 在 Benchmark v1.7 中,不再需要 `--protect-kernel-defaults` (`4.2.6`) 参数,并已被 CIS 删除。 -::: - -有关如何评估加固的 RKE 集群与官方 CIS benchmark 的更多细节,请参考特定 Kubernetes 和 CIS benchmark 版本的 RKE 自我评估指南。 - -## 主机级别要求 - -### 配置 Kernel 运行时参数 - -建议对群集中的所有节点类型使用以下 `sysctl` 配置。在 `/etc/sysctl.d/90-kubelet.conf` 中设置以下参数: - -```ini -vm.overcommit_memory=1 -vm.panic_on_oom=0 -kernel.panic=10 -kernel.panic_on_oops=1 -``` - -运行 `sysctl -p /etc/sysctl.d/90-kubelet.conf` 以启用设置。 - -### 配置 `etcd` 用户和组 - -在安装 RKE 之前,需要设置 **etcd** 服务的用户帐户和组。 - -#### 创建 `etcd` 用户和组 - -要创建 **etcd** 用户和组,请运行以下控制台命令。 -下面的命令示例中使用 `52034` 作为 **uid** 和 **gid** 。 -任何有效且未使用的 
**uid** 或 **gid** 都可以代替 `52034`。 - -```bash -groupadd --gid 52034 etcd -useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd --shell /usr/sbin/nologin -``` - -在通过集群配置文件 `config.yml` 部署RKE时,请更新 `etcd` 用户的 `uid` 和 `gid`: - -```yaml -services: - etcd: - gid: 52034 - uid: 52034 -``` - -## Kubernetes 运行时要求 - -### 配置 `default` Service Account - -#### 设置 `automountServiceAccountToken` 为 `false` 用于 `default` service accounts - -Kubernetes 提供了一个 default service account,供集群工作负载使用,其中没有为 pod 分配特定的 service account。 -如果需要从 pod 访问 Kubernetes API,则应为该 pod 创建特定的 service account,并向该 service account 授予权限。 -应配置 default service account,使其不提供 service account 令牌,并且不应具有任何明确的权限分配。 - -对于标准 RKE 安装上的每个命名空间(包括 `default` 和 `kube-system`),`default` service account 必须包含以下值: - -```yaml -automountServiceAccountToken: false -``` - -将以下配置保存到名为 `account_update.yaml` 的文件中。 - -```yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: default -automountServiceAccountToken: false -``` - -创建一个名为 `account_update.yaml` 的 bash 脚本文件。 -确保执行 `chmod +x account_update.sh` 命令,以赋予脚本执行权限。 - -```bash -#!/bin/bash -e - -for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do - kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)" -done -``` - -执行此脚本将 `account_update.yaml` 配置应用到所有命名空间中的 `default` service account。 - -### 配置网络策略 - -#### 确保所有命名空间都定义了网络策略 - -在同一个 Kubernetes 集群上运行不同的应用程序会带来风险,即某个受感染的应用程序可能会攻击相邻的应用程序。为确保容器只与其预期通信的容器进行通信,网络分段至关重要。网络策略规定了哪些 Pod 可以互相通信,以及与其他网络终端通信的方式。 - -网络策略是命名空间范围的。当在特定命名空间引入网络策略时,所有未被策略允许的流量将被拒绝。然而,如果在命名空间中没有网络策略,那么所有流量将被允许进入和离开该命名空间中的 Pod。要强制执行网络策略,必须启用容器网络接口(container network interface, CNI)插件。本指南使用 [Canal](https://github.com/projectcalico/canal) 来提供策略执行。有关 CNI 提供程序的其他信息可以在[这里](https://www.suse.com/c/rancher_blog/comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/)找到。 - -一旦在集群上启用了 CNI 提供程序,就可以应用默认的网络策略。下面提供了一个 **permissive** 的示例供参考。如果你希望允许匹配某个命名空间中所有 Pod 的所有入站和出站流量(即使添加了策略导致某些 Pod 
被视为”隔离”),你可以创建一个明确允许该命名空间中所有流量的策略。请将以下配置保存为 `default-allow-all.yaml`。有关网络策略的其他[文档](https://kubernetes.io/docs/concepts/services-networking/network-policies/)可以在 Kubernetes 站点上找到。 - -:::caution -此网络策略只是一个示例,不建议用于生产用途。 -::: - -```yaml ---- -apiVersion: networking.k8s.io/v1 -kind: NetworkPolicy -metadata: - name: default-allow-all -spec: - podSelector: {} - ingress: - - {} - egress: - - {} - policyTypes: - - Ingress - - Egress -``` - -创建一个名为 `apply_networkPolicy_to_all_ns.sh`的 Bash 脚本文件。 - -确保运行 `chmod +x apply_networkPolicy_to_all_ns.sh` 命令,以赋予脚本执行权限。 - -```bash -#!/bin/bash -e - -for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do - kubectl apply -f default-allow-all.yaml -n ${namespace} -done -``` - -执行此脚本以将 `default-allow-all.yaml` 配置和 **permissive** 的 `NetworkPolicy` 应用于所有命名空间。 - -## 已知限制 - -- 当注册自定义节点仅提供公共 IP 时,Rancher **exec shell** 和 **查看 pod 日志** 在加固设置中**不起作用**。 此功能需要在注册自定义节点时提供私有 IP。 - -## 加固的 RKE `cluster.yml` 配置参考 - -参考的 `cluster.yml` 文件是由 RKE CLI 使用的,它提供了实现 RKE 加固安装所需的配置。 -RKE [文档](https://rancher.com/docs/rke/latest/en/installation/)提供了有关配置项的更多详细信息。这里参考的 `cluster.yml` 不包括必需的 `nodes` 指令,因为它取决于你的环境。在 RKE 中有关节点配置的文档可以在[这里](https://rancher.com/docs/rke/latest/en/config-options/nodes/)找到。 - -示例 `cluster.yml` 配置文件中包含了一个 Admission Configuration 策略,在 `services.kube-api.admission_configuration` 字段中指定。这个[示例](../../psa-restricted-exemptions.md)策略包含了命名空间的豁免规则,这对于在Rancher中正确运行导入的RKE集群非常必要,类似于Rancher预定义的 [`rancher-restricted`](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) 策略。 - -如果你希望使用 RKE 的默认 `restricted` 策略,则将 `services.kube-api.admission_configuration` 字段留空,并将 `services.pod_security_configuration` 设置为 `restricted`。你可以在 [RKE 文档](https://rke.docs.rancher.com/config-options/services/pod-security-admission)中找到更多信息。 - - - - -:::note -如果你打算将一个 RKE 集群导入到 Rancher 
中,请参考此[文档](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md)以了解如何配置 PSA 以豁免 Rancher 系统命名空间。 -::: - -```yaml -# 如果你打算在离线环境部署 Kubernetes, -# 请查阅文档以了解如何配置自定义的 RKE 镜像。 -nodes: [] -kubernetes_version: # 定义 RKE 版本 -services: - etcd: - uid: 52034 - gid: 52034 - kube-api: - secrets_encryption_config: - enabled: true - audit_log: - enabled: true - event_rate_limit: - enabled: true - # 如果你在 `admission_configuration` 中设置了自定义策略, - # 请将 `pod_security_configuration` 字段留空。 - # 否则,将其设置为 `restricted` 以使用 RKE 预定义的受限策略, - # 并删除 `admission_configuration` 字段中的所有内容。 - # - # pod_security_configuration: restricted - # - admission_configuration: - apiVersion: apiserver.config.k8s.io/v1 - kind: AdmissionConfiguration - plugins: - - name: PodSecurity - configuration: - apiVersion: pod-security.admission.config.k8s.io/v1 - kind: PodSecurityConfiguration - defaults: - enforce: "restricted" - enforce-version: "latest" - audit: "restricted" - audit-version: "latest" - warn: "restricted" - warn-version: "latest" - exemptions: - usernames: [] - runtimeClasses: [] - namespaces: [calico-apiserver, - calico-system, - cattle-alerting, - cattle-csp-adapter-system, - cattle-elemental-system, - cattle-epinio-system, - cattle-externalip-system, - cattle-fleet-local-system, - cattle-fleet-system, - cattle-gatekeeper-system, - cattle-global-data, - cattle-global-nt, - cattle-impersonation-system, - cattle-istio, - cattle-istio-system, - cattle-logging, - cattle-logging-system, - cattle-monitoring-system, - cattle-neuvector-system, - cattle-prometheus, - cattle-provisioning-capi-system, - cattle-resources-system, - cattle-sriov-system, - cattle-system, - cattle-ui-plugin-system, - cattle-windows-gmsa-system, - cert-manager, - cis-operator-system, - fleet-default, - ingress-nginx, - istio-system, - kube-node-lease, - kube-public, - kube-system, - longhorn-system, - rancher-alerting-drivers, - security-scan, - tigera-operator] - 
kube-controller: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - kubelet: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - generate_serving_certificate: true -addons: | - apiVersion: networking.k8s.io/v1 - kind: NetworkPolicy - metadata: - name: default-allow-all - spec: - podSelector: {} - ingress: - - {} - egress: - - {} - policyTypes: - - Ingress - - Egress - --- - apiVersion: v1 - kind: ServiceAccount - metadata: - name: default - automountServiceAccountToken: false -``` - - - - -```yaml -# 如果你打算在离线环境部署 Kubernetes, -# 请查阅文档以了解如何配置自定义的 RKE 镜像。 -nodes: [] -kubernetes_version: # 定义 RKE 版本 -services: - etcd: - uid: 52034 - gid: 52034 - kube-api: - secrets_encryption_config: - enabled: true - audit_log: - enabled: true - event_rate_limit: - enabled: true - pod_security_policy: true - kube-controller: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - kubelet: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - protect-kernel-defaults: true - generate_serving_certificate: true -addons: | - # Upstream Kubernetes restricted PSP policy - # https://github.com/kubernetes/website/blob/564baf15c102412522e9c8fc6ef2b5ff5b6e766c/content/en/examples/policy/restricted-psp.yaml - apiVersion: policy/v1beta1 - kind: PodSecurityPolicy - metadata: - name: restricted-noroot - spec: - privileged: false - # Required to prevent escalations to root. - allowPrivilegeEscalation: false - requiredDropCapabilities: - - ALL - # Allow core volume types. - volumes: - - 'configMap' - - 'emptyDir' - - 'projected' - - 'secret' - - 'downwardAPI' - # Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use. - - 'csi' - - 'persistentVolumeClaim' - - 'ephemeral' - hostNetwork: false - hostIPC: false - hostPID: false - runAsUser: - # Require the container to run without root privileges. 
- rule: 'MustRunAsNonRoot' - seLinux: - # This policy assumes the nodes are using AppArmor rather than SELinux. - rule: 'RunAsAny' - supplementalGroups: - rule: 'MustRunAs' - ranges: - # Forbid adding the root group. - - min: 1 - max: 65535 - fsGroup: - rule: 'MustRunAs' - ranges: - # Forbid adding the root group. - - min: 1 - max: 65535 - readOnlyRootFilesystem: false - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRole - metadata: - name: psp:restricted-noroot - rules: - - apiGroups: - - extensions - resourceNames: - - restricted-noroot - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: psp:restricted-noroot - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: psp:restricted-noroot - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: networking.k8s.io/v1 - kind: NetworkPolicy - metadata: - name: default-allow-all - spec: - podSelector: {} - ingress: - - {} - egress: - - {} - policyTypes: - - Ingress - - Egress - --- - apiVersion: v1 - kind: ServiceAccount - metadata: - name: default - automountServiceAccountToken: false -``` - - - - -## 加固后的 RKE 集群模板配置参考 - -参考的 RKE 集群模板提供了实现 Kubernetes 加固安装所需的最低配置。RKE 模板用于提供 Kubernetes 并定义 Rancher 设置。有关安装 RKE 及其模板详情的其他信息,请参考 Rancher [文档](../../../../getting-started/installation-and-upgrade/installation-and-upgrade.md) 。 - - - - -```yaml -# -# 集群配置 -# -default_pod_security_admission_configuration_template_name: rancher-restricted -enable_network_policy: true -local_cluster_auth_endpoint: - enabled: true -name: # 定义集群名称 - -# -# Rancher 配置 -# -rancher_kubernetes_engine_config: - addon_job_timeout: 45 - authentication: - strategy: x509|webhook - kubernetes_version: # 定义 RKE 版本 - services: - etcd: - uid: 52034 - gid: 52034 - kube-api: - audit_log: - 
enabled: true - event_rate_limit: - enabled: true - pod_security_policy: false - secrets_encryption_config: - enabled: true - kube-controller: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - kubelet: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - generate_serving_certificate: true - scheduler: - extra_args: - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -``` - - - - -```yaml -# -# 集群配置 -# -default_pod_security_policy_template_id: restricted-noroot -enable_network_policy: true -local_cluster_auth_endpoint: - enabled: true -name: # 定义集群名称 - -# -# Rancher 配置 -# -rancher_kubernetes_engine_config: - addon_job_timeout: 45 - authentication: - strategy: x509|webhook - kubernetes_version: # 定义 RKE 版本 - services: - etcd: - uid: 52034 - gid: 52034 - kube-api: - audit_log: - enabled: true - event_rate_limit: - enabled: true - pod_security_policy: true - secrets_encryption_config: - enabled: true - kube-controller: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - tls-cipher-suites: 
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - kubelet: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - protect-kernel-defaults: true - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - generate_serving_certificate: true - scheduler: - extra_args: - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -``` - - - - -## 结论 - -如果你按照本指南操作,由 Rancher 提供的 RKE 自定义集群将配置为通过 CIS Kubernetes Benchmark 测试。你可以查看我们的 RKE 自我评估指南,了解我们是如何验证每个 benchmarks 的,并且你可以在你的集群上执行相同的操作。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md deleted file mode 100644 index cb3a548a8b1..00000000000 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md +++ /dev/null @@ -1,2864 +0,0 @@ ---- -title: RKE 自我评估指南 - CIS Benchmark v1.7 - K8s v1.25/v1.26/v1.27 ---- 
-
-
-
-
-
-
-
-This document is a companion to the [RKE Hardening Guide](rke1-hardening-guide.md), which provides prescriptive guidance for hardening RKE clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security level of a hardened cluster against each control in the CIS Kubernetes Benchmark.
-
-This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
-
-| Rancher Version | CIS Benchmark Version | Kubernetes Version |
-|-----------------|-----------------------|--------------------|
-| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25/v1.26/v1.27 |
-
-This guide walks through the various controls and provides updated example commands to audit compliance in Rancher-created clusters. Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. These checks return a result of `Not Applicable`.
-
-This document is intended for Rancher operators, security teams, auditors, and decision makers.
-
-For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.7. You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
-
-## Testing Methodology
-
-Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by the arguments passed to the container at the time of initialization, not via configuration files.
-
-Where control audits differ from the original CIS benchmark, Rancher-specific audit commands are provided for testing. When performing the tests, you will need access to the command line on all RKE node hosts. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required for testing and evaluating test results.
-
-:::note
-
-This guide only covers `automated` (previously called `scored`) tests.
-
-:::
-
-### Controls
-
-## 1.1 Control Plane Node Configuration Files
-### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the
-control plane node.
-For example, chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
-All configuration is passed in as arguments at container run time.
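Because RKE passes all component configuration as container arguments rather than manifest files, the audits later in this guide check flags in `ps -ef` output instead of inspecting files. A minimal, self-contained sketch of that pattern (the sample command line below is illustrative, not taken from a live node):

```shell
# Illustrative kube-apiserver command line, as it would appear in `ps -ef` output.
cmdline='kube-apiserver --anonymous-auth=false --profiling=false --authorization-mode=Node,RBAC'

# Print the value of a single --flag=value argument from a command-line string.
flag_value() {
  printf '%s\n' "$1" | tr ' ' '\n' | sed -n "s/^$2=//p"
}

flag_value "$cmdline" '--anonymous-auth'      # prints "false"
flag_value "$cmdline" '--authorization-mode'  # prints "Node,RBAC"
```

The expected-result lines in the controls below (for example, `'--anonymous-auth' is equal to 'false'`) are exactly this kind of comparison, performed by the benchmark tooling against the live process.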
-
-### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 600 /etc/kubernetes/manifests/etcd.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root /etc/kubernetes/manifests/etcd.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
-All configuration is passed in as arguments at container run time.
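The file and directory controls from 1.1.9 onward all reduce to the same check: read the mode bits with `stat` and confirm that no bits outside the allowed mask are set. A small self-contained sketch of that comparison, using a temporary file in place of a real cluster path (GNU `stat` and `find` are assumed, as in the audit commands below):

```shell
# Create a sample file to stand in for a cluster configuration file.
tmpfile=$(mktemp)
chmod 600 "$tmpfile"

mode=$(stat -c %a "$tmpfile")
# "600 or more restrictive" means no owner-execute, group, or world bits
# (octal mask 177) may be set; GNU find's -perm /177 matches any of them.
if [ -z "$(find "$tmpfile" -perm /177)" ]; then
  result="permissions=$mode ok"
else
  result="permissions=$mode too permissive"
fi
echo "$result"
rm -f "$tmpfile"
```

Note that "more restrictive" is a bitwise condition, not a numeric one: a mode like 640 fails even though 640 > 600, because it sets a group-read bit outside the mask.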
-
-### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 600
-
-**Audit:**
-
-```bash
-ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
-find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
-```
-
-**Expected Result**:
-
-```console
-'permissions' is present
-```
-
-### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root
-
-**Audit:**
-
-```bash
-ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
-find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above).
For example, -chmod 700 /var/lib/etcd - -**Audit:** - -```bash -stat -c %a /node/var/lib/etcd -``` - -**Expected Result**: - -```console -'700' is equal to '700' -``` - -**Returned Value**: - -```console -700 -``` - -### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) - - -**Result:** pass - -**Remediation:** -On the etcd server node, get the etcd data directory, passed as an argument --data-dir, -from the command 'ps -ef | grep etcd'. -Run the below command (based on the etcd data directory found above). -For example, chown etcd:etcd /var/lib/etcd - -**Audit:** - -```bash -stat -c %U:%G /node/var/lib/etcd -``` - -**Expected Result**: - -```console -'etcd:etcd' is present -``` - -**Returned Value**: - -```console -etcd:etcd -``` - -### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chmod 600 /etc/kubernetes/admin.conf -Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. - -### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chown root:root /etc/kubernetes/admin.conf -Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. - -### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. 
-For example, -chmod 600 scheduler -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. - -### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root scheduler -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. - -### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chmod 600 controllermanager -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. - -### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root controllermanager -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. - -### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. 
-For example, -chown -R root:root /etc/kubernetes/pki/ - -**Audit Script:** `check_files_owner_in_dir.sh` - -```bash -#!/usr/bin/env bash - -# This script is used to ensure the owner is set to root:root for -# the given directory and all the files in it -# -# inputs: -# $1 = /full/path/to/directory -# -# outputs: -# true/false - -INPUT_DIR=$1 - -if [[ "${INPUT_DIR}" == "" ]]; then - echo "false" - exit -fi - -if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then - echo "false" - exit -fi - -statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*) -while read -r statInfoLine; do - f=$(echo ${statInfoLine} | cut -d' ' -f1) - p=$(echo ${statInfoLine} | cut -d' ' -f2) - - if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then - if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then - echo "false" - exit - fi - else - if [[ "$p" != "root:root" ]]; then - echo "false" - exit - fi - fi -done <<< "${statInfoLines}" - - -echo "true" -exit - -``` - -**Audit Execution:** - -```bash -./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Returned Value**: - -```console -true -``` - -### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual) - - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' -exec chmod -R 600 {} + - -**Audit:** - -```bash -find /node/etc/kubernetes/ssl/ -name '*.pem' ! 
-name '*key.pem' | xargs stat -c permissions=%a -``` - -**Expected Result**: - -```console -permissions has permissions 644, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=644 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 -``` - -### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -find /node/etc/kubernetes/ssl/ -name '*key.pem' -exec chmod -R 600 {} + - -**Audit:** - -```bash -find /node/etc/kubernetes/ssl/ -name '*key.pem' | xargs stat -c permissions=%a -``` - -**Expected Result**: - -```console -permissions has permissions 600, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 -``` - -## 1.2 API Server -### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the below parameter. ---anonymous-auth=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--anonymous-auth' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the documentation and configure alternate mechanisms for authentication. Then, -edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and remove the --token-auth-file= parameter. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--token-auth-file' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and remove the `DenyServiceExternalIPs` -from enabled admission plugins. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the -apiserver and kubelets. Then, edit API server pod specification file -/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the -kubelet client certificate and key parameters as below. ---kubelet-client-certificate= ---kubelet-client-key= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Follow the Kubernetes documentation and setup the TLS connection between -the apiserver and kubelets. Then, edit the API server pod specification file -/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the ---kubelet-certificate-authority parameter to the path to the cert file for the certificate authority. ---kubelet-certificate-authority= -When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. - -### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow. -One such example could be as below. 
---authorization-mode=RBAC - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' does not have 'AlwaysAllow' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to a value that includes Node. ---authorization-mode=Node,RBAC - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' has 'Node' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to a value that includes RBAC, -for example `--authorization-mode=Node,RBAC`. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' has 'RBAC' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set the desired limits in a configuration file. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -and set the below parameters. ---enable-admission-plugins=...,EventRateLimit,... ---admission-control-config-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'EventRateLimit' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a -value that does not include AlwaysAdmit. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual) - - -**Result:** warn - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to include -AlwaysPullImages. ---enable-admission-plugins=...,AlwaysPullImages,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'AlwaysPullImages' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual) - - -**Result:** warn - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to include -SecurityContextDeny, unless PodSecurityPolicy is already in place. ---enable-admission-plugins=...,SecurityContextDeny,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the documentation and create ServiceAccount objects as per your environment. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and ensure that the --disable-admission-plugins parameter is set to a -value that does not include ServiceAccount. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --disable-admission-plugins parameter to -ensure it does not include NamespaceLifecycle. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to a -value that includes NodeRestriction. ---enable-admission-plugins=...,NodeRestriction,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'NodeRestriction' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.16 Ensure that the --secure-port argument is not set to 0 - NoteThis recommendation is obsolete and will be deleted per the consensus process (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and either remove the --secure-port parameter or -set it to a different (non-zero) desired port. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--secure-port' is greater than 0 OR '--secure-port' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.17 Ensure that the --profiling argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the below parameter. ---profiling=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--profiling' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.18 Ensure that the --audit-log-path argument is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-path parameter to a suitable path and
-file where you would like audit logs to be written, for example,
---audit-log-path=/var/log/apiserver/audit.log
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-path' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxage parameter to 30
-or as an appropriate number of days, for example,
---audit-log-maxage=30
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxage' is greater or equal to 30
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
-value. For example,
---audit-log-maxbackup=10
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxbackup' is greater or equal to 10
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
-For example, to set it as 100 MB, --audit-log-maxsize=100
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxsize' is greater or equal to 100
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-and set the below parameter as appropriate and if needed.
-For example, --request-timeout=300s
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---service-account-lookup=true
-Alternatively, you can delete the --service-account-lookup parameter from this file so
-that the default takes effect.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --service-account-key-file parameter
-to the public key file for service accounts. For example,
---service-account-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate and key file parameters.
---etcd-certfile=
---etcd-keyfile=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--etcd-certfile' is present AND '--etcd-keyfile' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the TLS certificate and private key file parameters.
---tls-cert-file=
---tls-private-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the client certificate authority file.
---client-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate authority file parameter.
---etcd-cafile=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--etcd-cafile' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and configure an EncryptionConfig file. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --encryption-provider-config parameter to the path of that file. -For example, --encryption-provider-config= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--encryption-provider-config' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.30 Ensure that encryption providers are appropriately configured (Manual) - - -**Result:** warn - -**Remediation:** -Follow the Kubernetes documentation and configure an EncryptionConfig file. -In this file, choose aescbc, kms or secretbox as the encryption provider. - -**Audit:** - -```bash -ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%'); if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi -``` - -**Expected Result**: - -```console -'provider' is present -``` - -### 1.2.31 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the below parameter. 
---tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256, -TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, -TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, -TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, -TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, -TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, -TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA, -TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -## 1.3 Controller Manager -### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold, -for example, --terminated-pod-gc-threshold=10 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--terminated-pod-gc-threshold' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.2 Ensure that the --profiling argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the below parameter. ---profiling=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--profiling' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node to set the below parameter. ---use-service-account-credentials=true - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--use-service-account-credentials' is not equal to 'false' -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --service-account-private-key-file parameter -to the private key file for service accounts. ---service-account-private-key-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--service-account-private-key-file' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --root-ca-file parameter to the certificate bundle file. ---root-ca-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--root-ca-file' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. ---feature-gates=RotateKubeletServerCertificate=true -Clusters provisioned by RKE handle certificate rotation directly through RKE. 
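The `ps`-based audits in this guide all reduce to the same pattern: capture the process command line, then check whether a given flag and value appear in it. The sketch below illustrates that pattern for the feature gate above; it is not part of the benchmark tooling, and the `line` variable is a stand-in for real `ps -ef | grep kube-controller-manager | grep -v grep` output.

```bash
# Illustrative only: `line` stands in for the captured controller manager
# command line (normally taken from `ps -ef | grep kube-controller-manager`).
line='kube-controller-manager --profiling=false --feature-gates=RotateKubeletServerCertificate=true'

# Pattern-match the flag/value pair the check cares about and report the result.
case "$line" in
  *RotateKubeletServerCertificate=true*) echo "RotateKubeletServerCertificate=true: present" ;;
  *) echo "RotateKubeletServerCertificate=true: missing" ;;
esac
```

On an RKE-provisioned node the check is Not Applicable either way, since RKE rotates certificates itself, but the returned value above shows the gate is set regardless.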
- -### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and ensure the correct value for the --bind-address parameter - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--bind-address' is present OR '--bind-address' is not present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -## 1.4 Scheduler -### 1.4.1 Ensure that the --profiling argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file -on the control 
plane node and set the below parameter. ---profiling=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected Result**: - -```console -'--profiling' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4339 4318 0 Sep11 ? 00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true -``` - -### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml -on the control plane node and ensure the correct value for the --bind-address parameter - -**Audit:** - -```bash -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected Result**: - -```console -'--bind-address' is present OR '--bind-address' is not present -``` - -**Returned Value**: - -```console -root 4339 4318 0 Sep11 ? 
00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true -``` - -## 2 Etcd Node Configuration -### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the etcd service documentation and configure TLS encryption. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml -on the master node and set the below parameters. ---cert-file= ---key-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--cert-file' is present AND '--key-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and set the below parameter. ---client-cert-auth="true" - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--client-cert-auth' is equal to 'true' -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.3 Ensure that the --auto-tls argument is not set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and either remove the --auto-tls parameter or set it to false. 
- --auto-tls=false - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present -``` - -**Returned Value**: - -```console -PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ -``` - -### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the etcd service documentation and configure peer TLS encryption as appropriate -for your etcd cluster. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the -master node and set the below parameters. ---peer-cert-file= ---peer-key-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--peer-cert-file' is present AND '--peer-key-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and set the below parameter. ---peer-client-cert-auth=true - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--peer-client-cert-auth' is equal to 'true' -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and either remove the --peer-auto-tls parameter or set it to false. 
---peer-auto-tls=false - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present -``` - -**Returned Value**: - -```console -PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ -``` - -### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated) - - -**Result:** pass - -**Remediation:** -[Manual test] -Follow the etcd documentation and create a dedicated certificate authority setup for the -etcd service. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the -master node and set the below parameter. ---trusted-ca-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--trusted-ca-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -## 3.1 Authentication and Authorization -### 3.1.1 Client certificate authentication should not be used for users (Manual) - - -**Result:** warn - -**Remediation:** -Alternative mechanisms provided by Kubernetes such as the use of OIDC should be -implemented in place of client certificates. - -### 3.1.2 Service account token authentication should not be used for users (Manual) - - -**Result:** warn - -**Remediation:** -Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented -in place of service account tokens. - -### 3.1.3 Bootstrap token authentication should not be used for users (Manual) - - -**Result:** warn - -**Remediation:** -Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented -in place of bootstrap tokens. - -## 3.2 Logging -### 3.2.1 Ensure that a minimal audit policy is created (Automated) - - -**Result:** pass - -**Remediation:** -Create an audit policy file for your cluster. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-policy-file' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 3.2.2 Ensure that the audit policy covers key security concerns (Manual) - - -**Result:** warn - -**Remediation:** -Review the audit policy provided for the cluster and ensure that it covers -at least the following areas, -- Access to Secrets managed by the cluster. Care should be taken to only - log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in - order to avoid risk of logging sensitive data. -- Modification of Pod and Deployment objects. -- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`. -For most requests, minimally logging at the Metadata level is recommended -(the most basic level of logging). - -## 4.1 Worker Node Configuration Files -### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service. -All configuration is passed in as arguments at container run time. 
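The file-permission audits in section 4.1 all reduce to the same test: the file's mode must be 600 or more restrictive, meaning no permission bit outside rw------- (octal 600) may be set. A minimal sketch of that test in bash (the temporary demo file is an illustration, not a path the benchmark prescribes):

```shell
#!/usr/bin/env bash
# Sketch of the "600 or more restrictive" test applied by the 4.1.x audits:
# a mode passes when (mode & 0177) == 0, i.e. no owner-execute and no
# group/other bits are set.
is_600_or_stricter() {
  local mode=$1                   # octal mode string such as 644
  (( (8#$mode & 8#177) == 0 ))
}

f=$(mktemp)                       # throwaway demo file, not a benchmark path
chmod 644 "$f"
is_600_or_stricter "$(stat -c %a "$f")" && echo "644: pass" || echo "644: fail"
chmod 600 "$f"                    # the remediation the benchmark suggests
is_600_or_stricter "$(stat -c %a "$f")" && echo "600: pass" || echo "600: fail"
rm -f "$f"
# → 644: fail
# → 600: pass
```

Note that a plain numeric comparison such as `mode <= 600` would wrongly accept modes like 444; the bitmask form matches the benchmark's intent exactly.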
- -### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, -chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service. -All configuration is passed in as arguments at container run time. - -### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, -chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' -``` - -**Expected Result**: - -```console -permissions has permissions 600, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 -``` - -### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. 
-For example, chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' -``` - -**Expected Result**: - -```console -'root:root' is present -``` - -**Returned Value**: - -```console -root:root -``` - -### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, -chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' -``` - -**Expected Result**: - -```console -permissions has permissions 600, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 -``` - -### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. 
-For example, -chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' -``` - -**Expected Result**: - -```console -'root:root' is present -``` - -**Returned Value**: - -```console -root:root -``` - -### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated) - - -**Result:** fail - -**Remediation:** -Run the following command to modify the file permissions of the ---client-ca-file. -chmod 600 - -**Audit:** - -```bash -stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem -``` - -**Expected Result**: - -```console -permissions has permissions 644, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=644 -``` - -### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the following command to modify the ownership of the --client-ca-file. -chown root:root - -**Audit:** - -```bash -stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem -``` - -**Expected Result**: - -```console -'root:root' is equal to 'root:root' -``` - -**Returned Value**: - -```console -root:root -``` - -### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the following command (using the config file location identified in the Audit step) -chmod 600 /var/lib/kubelet/config.yaml -Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet. -All configuration is passed in as arguments at container run time. 
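Check 4.1.7 is the one failing item in this group: kube-ca.pem is group- and world-readable at 644. A small runnable sketch of the fix-and-verify cycle, using a temporary file in place of the real /node/etc/kubernetes/ssl/kube-ca.pem (a stand-in path, since the real file needs node access and root):

```shell
#!/usr/bin/env bash
# Reproduce the 4.1.7 failure on a throwaway file, apply the remediation,
# and re-run the same stat audit the benchmark uses. The chown root:root
# step from check 4.1.8 is omitted because it requires root.
set -u

f=$(mktemp)                       # stand-in for kube-ca.pem
chmod 644 "$f"                    # the failing state reported by the audit
echo "before: $(stat -c permissions=%a "$f")"
chmod 600 "$f"                    # remediation from check 4.1.7
echo "after: $(stat -c permissions=%a "$f")"
rm -f "$f"
# → before: permissions=644
# → after: permissions=600
```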
- -### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Run the following command (using the config file location identified in the Audit step) -chown root:root /var/lib/kubelet/config.yaml -Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet. -All configuration is passed in as arguments at container run time. - -## 4.2 Kubelet -### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to -`false`. -If using executable arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. -`--anonymous-auth=false` -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--anonymous-auth' is equal to 'false' -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If -using executable arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_AUTHZ_ARGS variable. ---authorization-mode=Webhook -Based on your system, restart the kubelet service. 
For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--authorization-mode' does not have 'AlwaysAllow' -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file 
to set `authentication.x509.clientCAFile` to -the location of the client CA file. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_AUTHZ_ARGS variable. ---client-ca-file= -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--client-ca-file' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local 
--tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `readOnlyPort` to 0. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. ---read-only-port=0 -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--read-only-port' is equal to '0' OR '--read-only-port' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a -value other than 0. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. ---streaming-connection-idle-timeout=5m -Based on your system, restart the kubelet service. 
For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated) - - -**Result:** pass 
- -**Remediation:** -If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -remove the --make-iptables-util-chains argument from the -KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 
--kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.7 Ensure that the --hostname-override argument is not set (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -on each worker node and remove the --hostname-override argument from the -KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service -Not Applicable - Clusters provisioned by RKE set the --hostname-override to avoid any hostname configuration errors - -### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--event-qps' is greater or equal to 0 OR '--event-qps' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `tlsCertFile` to the location -of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile` -to the location of the corresponding private key file. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameters in KUBELET_CERTIFICATE_ARGS variable. 
---tls-cert-file= ---tls-private-key-file= -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--tls-cert-file' is present AND '--tls-private-key-file' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.10 Ensure that the --rotate-certificates argument is 
not set to false (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to add the line `rotateCertificates` to `true` or -remove it altogether to use the default value. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS -variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--rotate-certificates' is present OR '--rotate-certificates' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true 
--cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
---feature-gates=RotateKubeletServerCertificate=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-Not Applicable - Clusters provisioned by RKE handle certificate rotation directly through RKE.
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
-TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-or to a subset of these values.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the --tls-cipher-suites parameter as follows, or to a subset of these values.
---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.13 Ensure that a limit is set on pod PIDs (Manual) - - -**Result:** warn - -**Remediation:** -Decide on an appropriate level for this parameter and set it, -either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting. - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--pod-max-pids' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -## 5.1 RBAC and Service Accounts -### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual) - - -**Result:** warn - -**Remediation:** -Identify all clusterrolebindings to the cluster-admin role. Check if they are used and -if they need this role or if they could use a role with fewer privileges. 
-Where possible, first bind users to a lower privileged role and then remove the -clusterrolebinding to the cluster-admin role : -kubectl delete clusterrolebinding [name] - -### 5.1.2 Minimize access to secrets (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove get, list and watch access to Secret objects in the cluster. - -### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual) - - -**Result:** warn - -**Remediation:** -Where possible replace any use of wildcards in clusterroles and roles with specific -objects or actions. - -### 5.1.4 Minimize access to create pods (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove create access to pod objects in the cluster. - -### 5.1.5 Ensure that default service accounts are not actively used. (Manual) - - -**Result:** pass - -**Remediation:** -Create explicit service accounts wherever a Kubernetes workload requires specific access -to the Kubernetes API server. -Modify the configuration of each default service account to include this value -automountServiceAccountToken: false - -**Audit Script:** `check_for_default_sa.sh` - -```bash -#!/bin/bash - -set -eE - -handle_error() { - echo "false" -} - -trap 'handle_error' ERR - -count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l) -if [[ ${count_sa} -gt 0 ]]; then - echo "false" - exit -fi - -for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name") -do - for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"') - do - read kind name <<<$(IFS=","; echo $result) - 
resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l) - if [[ ${resource_count} -gt 0 ]]; then - echo "false" - exit - fi - done -done - - -echo "true" - -``` - -**Audit Execution:** - -```bash -./check_for_default_sa.sh -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Returned Value**: - -```console -true -``` - -### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) - - -**Result:** warn - -**Remediation:** -Modify the definition of pods and service accounts which do not need to mount service -account tokens to disable it. - -### 5.1.7 Avoid use of system:masters group (Manual) - - -**Result:** warn - -**Remediation:** -Remove the system:masters group from all users in the cluster. - -### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove the impersonate, bind and escalate rights from subjects. - -### 5.1.9 Minimize access to create persistent volumes (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove create access to PersistentVolume objects in the cluster. - -### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the proxy sub-resource of node objects. - -### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the approval sub-resource of certificatesigningrequest objects. 
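The least-privilege checks above (5.1.2–5.1.4 and 5.1.9–5.1.11) all come down to granting narrowly scoped RBAC rules instead of wildcards or broad verbs. The following is an illustrative sketch only; the `app-reader` role name and `app-team` namespace are hypothetical, not part of the benchmark:

```yaml
# A narrowly scoped Role: named resources and verbs instead of
# wildcards (5.1.3), no access to Secrets (5.1.2), and no create
# rights on pods (5.1.4).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-reader      # hypothetical
  namespace: app-team   # hypothetical
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
```

Binding subjects to roles like this first makes it safer to then remove their broader bindings, as recommended in 5.1.1.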
-
-### 5.1.12 Minimize access to webhook configuration objects (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects.
-
-### 5.1.13 Minimize access to the service account token creation (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove access to the token sub-resource of serviceaccount objects.
-
-## 5.2 Pod Security Standards
-### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that either Pod Security Admission or an external policy control system is in place
-for every namespace which contains user workloads.
-
-### 5.2.2 Minimize the admission of privileged containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of privileged containers.
-
-### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostPID` containers.
-
-### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostIPC` containers.
-
-### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostNetwork` containers.
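If Pod Security Admission is the policy control mechanism chosen for 5.2.1, the namespace-level restrictions in 5.2.2–5.2.5 can be enforced declaratively with namespace labels. A sketch, assuming a hypothetical `user-workloads` namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: user-workloads   # hypothetical
  labels:
    # The "restricted" profile rejects privileged containers as well as
    # hostPID, hostIPC and hostNetwork pods (5.2.2-5.2.5).
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
```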
-
-### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
-
-### 5.2.7 Minimize the admission of root containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
-or `MustRunAs` with the range of UIDs not including 0, is set.
-
-### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with the `NET_RAW` capability.
-
-### 5.2.9 Minimize the admission of containers with added capabilities (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that `allowedCapabilities` is not present in policies for the cluster unless
-it is set to an empty array.
-
-### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the use of capabilities in applications running on your cluster. Where a namespace
-contains applications which do not require any Linux capabilities to operate, consider adding
-a PSP which forbids the admission of containers which do not drop all capabilities.
-
-### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
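From the workload side, a container that satisfies the checks in 5.2.6–5.2.10 carries a security context along the following lines; the pod and image names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example   # hypothetical
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical
    securityContext:
      allowPrivilegeEscalation: false   # 5.2.6
      runAsNonRoot: true                # 5.2.7
      capabilities:
        drop: ["ALL"]                   # 5.2.8-5.2.10
```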
- -### 5.2.12 Minimize the admission of HostPath volumes (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers with `hostPath` volumes. - -### 5.2.13 Minimize the admission of containers which use HostPorts (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers which use `hostPort` sections. - -## 5.3 Network Policies and CNI -### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual) - - -**Result:** warn - -**Remediation:** -If the CNI plugin in use does not support network policies, consideration should be given to -making use of a different plugin, or finding an alternate mechanism for restricting traffic -in the Kubernetes cluster. - -### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual) - - -**Result:** warn - -**Remediation:** -Follow the documentation and create NetworkPolicy objects as you need them. - -## 5.4 Secrets Management -### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual) - - -**Result:** warn - -**Remediation:** -If possible, rewrite application code to read Secrets from mounted secret files, rather than -from environment variables. - -### 5.4.2 Consider external secret storage (Manual) - - -**Result:** warn - -**Remediation:** -Refer to the Secrets management options offered by your cloud provider or a third-party -secrets management solution. - -## 5.5 Extensible Admission Control -### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual) - - -**Result:** warn - -**Remediation:** -Follow the Kubernetes documentation and setup image provenance. 
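For the NetworkPolicy check (5.3.2), a common starting point — not mandated by the benchmark — is a default-deny policy in each namespace, after which explicit allow rules are added per application:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}   # selects every pod in the namespace
  policyTypes:      # with no ingress/egress rules listed,
  - Ingress         # all traffic to and from these pods is denied
  - Egress
```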
- -## 5.7 General Policies -### 5.7.1 Create administrative boundaries between resources using namespaces (Manual) - - -**Result:** warn - -**Remediation:** -Follow the documentation and create namespaces for objects in your deployment as you need -them. - -### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual) - - -**Result:** warn - -**Remediation:** -Use `securityContext` to enable the docker/default seccomp profile in your pod definitions. -An example is as below: - securityContext: - seccompProfile: - type: RuntimeDefault - -### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual) - - -**Result:** warn - -**Remediation:** -Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a -suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker -Containers. - -### 5.7.4 The default namespace should not be used (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Ensure that namespaces are created to allow for appropriate segregation of Kubernetes -resources and that all new resources are created in a specific namespace. 
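The namespace guidance in 5.7.1 and 5.7.4 amounts to creating dedicated namespaces and deploying workloads into them explicitly instead of relying on `default`. A minimal sketch with a hypothetical `team-a` namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a   # hypothetical
  labels:
    team: a      # labels support per-team policies and quotas
```

Workloads would then set `metadata.namespace: team-a` rather than landing in the `default` namespace.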
-
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md
deleted file mode 100644
index eaffdb72d92..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md
+++ /dev/null
@@ -1,516 +0,0 @@
----
-title: RKE Hardening Guide
----
-
-
-
-
-
-
-
-This document provides prescriptive guidance for hardening a production-ready RKE cluster before provisioning it with Rancher. It outlines the configurations and controls required to satisfy the Center for Information Security (CIS) Kubernetes benchmark controls.
-
-:::note
-This hardening guide describes how to secure the nodes in your cluster. We recommend following this guide before you install Kubernetes.
-:::
-
-This hardening guide is intended for RKE clusters and is associated with the following versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher:
-
-| Rancher Version | CIS Benchmark Version | Kubernetes Version |
-|-----------------|-----------------------|------------------------------|
-| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.23 |
-| Rancher v2.7 | Benchmark v1.24 | Kubernetes v1.24 |
-| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25 up to v1.26 |
-
-:::note
-- In Benchmark v1.24 and later, the check id `4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)` might fail, as `/etc/kubernetes/ssl/kube-ca.pem` is set to 644 by default.
-- In Benchmark v1.7, the `--protect-kernel-defaults` (`4.2.6`) argument is no longer required and was removed by CIS.
-:::
-
-For more details on evaluating a hardened RKE cluster against the official CIS benchmark, refer to the RKE self-assessment guides for the specific Kubernetes and CIS benchmark versions.
-
-## Host-level Requirements
-
-### Configure Kernel Runtime Parameters
-
-The following `sysctl` configuration is recommended for all node types in the cluster. Set the following parameters in `/etc/sysctl.d/90-kubelet.conf`:
-
-```ini
-vm.overcommit_memory=1
-vm.panic_on_oom=0
-kernel.panic=10
-kernel.panic_on_oops=1
-```
-
-Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.
-
-### Configure the `etcd` User and Group
-
-A user account and group for the **etcd** service must be set up before installing RKE.
-
-#### Create the `etcd` User and Group
-
-To create the **etcd** user and group, run the following console commands.
-The example commands below use `52034` as the **uid** and **gid**
。 -任何有效且未使用的 **uid** 或 **gid** 都可以代替 `52034`。 - -```bash -groupadd --gid 52034 etcd -useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd --shell /usr/sbin/nologin -``` - -在通过集群配置文件 `config.yml` 部署 RKE 时,请更新 `etcd` 用户的 `uid` 和 `gid`: - -```yaml -services: - etcd: - gid: 52034 - uid: 52034 -``` - -## Kubernetes 运行时要求 - -### 配置 `default` Service Account - -#### 设置 `automountServiceAccountToken` 为 `false` 用于 `default` service accounts - -Kubernetes 提供了一个 default service account,供集群工作负载使用,其中没有为 pod 分配特定的 service account。 -如果需要从 pod 访问 Kubernetes API,则应为该 pod 创建特定的 service account,并向该 service account 授予权限。 -应配置 default service account,使其不提供 service account 令牌,并且不应具有任何明确的权限分配。 - -对于标准 RKE 安装上的每个命名空间(包括 `default` 和 `kube-system`),`default` service account 必须包含以下值: - -```yaml -automountServiceAccountToken: false -``` - -将以下配置保存到名为 `account_update.yaml` 的文件中。 - -```yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: default -automountServiceAccountToken: false -``` - -创建一个名为 `account_update.sh` 的 bash 脚本文件。 -确保执行 `chmod +x account_update.sh` 命令,以赋予脚本执行权限。 - -```bash -#!/bin/bash -e - -for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do - kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)" -done -``` - -执行此脚本将 `account_update.yaml` 配置应用到所有命名空间中的 `default` service account。 - -### 配置网络策略 - -#### 确保所有命名空间都定义了网络策略 - -在同一个 Kubernetes 集群上运行不同的应用程序会带来风险,即某个受感染的应用程序可能会攻击相邻的应用程序。为确保容器只与其预期通信的容器进行通信,网络分段至关重要。网络策略规定了哪些 Pod 可以互相通信,以及与其他网络终端通信的方式。 - -网络策略是命名空间范围的。当在特定命名空间引入网络策略时,所有未被策略允许的流量将被拒绝。然而,如果在命名空间中没有网络策略,那么所有流量将被允许进入和离开该命名空间中的 Pod。要强制执行网络策略,必须启用容器网络接口(container network interface, CNI)插件。本指南使用 [Canal](https://github.com/projectcalico/canal) 来提供策略执行。有关 CNI 提供程序的其他信息可以在[这里](https://www.suse.com/c/rancher_blog/comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/)找到。 - -一旦在集群上启用了 CNI 提供程序,就可以应用默认的网络策略。下面提供了一个 **permissive** 的示例供参考。如果你希望允许匹配某个命名空间中所有 Pod
的所有入站和出站流量(即使添加了策略导致某些 Pod 被视为“隔离”),你可以创建一个明确允许该命名空间中所有流量的策略。请将以下配置保存为 `default-allow-all.yaml`。有关网络策略的其他[文档](https://kubernetes.io/docs/concepts/services-networking/network-policies/)可以在 Kubernetes 站点上找到。 - -:::caution -此网络策略只是一个示例,不建议用于生产用途。 -::: - -```yaml ---- -apiVersion: networking.k8s.io/v1 -kind: NetworkPolicy -metadata: - name: default-allow-all -spec: - podSelector: {} - ingress: - - {} - egress: - - {} - policyTypes: - - Ingress - - Egress -``` - -创建一个名为 `apply_networkPolicy_to_all_ns.sh` 的 Bash 脚本文件。 - -确保运行 `chmod +x apply_networkPolicy_to_all_ns.sh` 命令,以赋予脚本执行权限。 - -```bash -#!/bin/bash -e - -for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do - kubectl apply -f default-allow-all.yaml -n ${namespace} -done -``` - -执行此脚本以将 `default-allow-all.yaml` 配置和 **permissive** 的 `NetworkPolicy` 应用于所有命名空间。 - -## 已知限制 - -- 当注册自定义节点仅提供公共 IP 时,Rancher **exec shell** 和 **查看 pod 日志** 在加固设置中**不起作用**。此功能需要在注册自定义节点时提供私有 IP。 - -## 加固的 RKE `cluster.yml` 配置参考 - -参考的 `cluster.yml` 文件是由 RKE CLI 使用的,它提供了实现 RKE 加固安装所需的配置。 -RKE [文档](https://rancher.com/docs/rke/latest/en/installation/)提供了有关配置项的更多详细信息。这里参考的 `cluster.yml` 不包括必需的 `nodes` 指令,因为它取决于你的环境。在 RKE 中有关节点配置的文档可以在[这里](https://rancher.com/docs/rke/latest/en/config-options/nodes/)找到。 - -示例 `cluster.yml` 配置文件中包含了一个 Admission Configuration 策略,在 `services.kube-api.admission_configuration` 字段中指定。这个[示例](../../psa-restricted-exemptions.md)策略包含了命名空间的豁免规则,这对于在 Rancher 中正确运行导入的 RKE 集群非常必要,类似于 Rancher 预定义的 [`rancher-restricted`](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) 策略。 - -如果你希望使用 RKE 的默认 `restricted` 策略,则将 `services.kube-api.admission_configuration` 字段留空,并将 `services.pod_security_configuration` 设置为 `restricted`。你可以在 [RKE 文档](https://rke.docs.rancher.com/config-options/services/pod-security-admission)中找到更多信息。 - - - - -:::note -如果你打算将一个 RKE 集群导入到 Rancher
中,请参考此[文档](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md)以了解如何配置 PSA 以豁免 Rancher 系统命名空间。 -::: - -```yaml -# 如果你打算在离线环境部署 Kubernetes, -# 请查阅文档以了解如何配置自定义的 RKE 镜像。 -nodes: [] -kubernetes_version: # 定义 RKE 版本 -services: - etcd: - uid: 52034 - gid: 52034 - kube-api: - secrets_encryption_config: - enabled: true - audit_log: - enabled: true - event_rate_limit: - enabled: true - # 如果你在 `admission_configuration` 中设置了自定义策略, - # 请将 `pod_security_configuration` 字段留空。 - # 否则,将其设置为 `restricted` 以使用 RKE 预定义的受限策略, - # 并删除 `admission_configuration` 字段中的所有内容。 - # - # pod_security_configuration: restricted - # - admission_configuration: - apiVersion: apiserver.config.k8s.io/v1 - kind: AdmissionConfiguration - plugins: - - name: PodSecurity - configuration: - apiVersion: pod-security.admission.config.k8s.io/v1 - kind: PodSecurityConfiguration - defaults: - enforce: "restricted" - enforce-version: "latest" - audit: "restricted" - audit-version: "latest" - warn: "restricted" - warn-version: "latest" - exemptions: - usernames: [] - runtimeClasses: [] - namespaces: [calico-apiserver, - calico-system, - cattle-alerting, - cattle-csp-adapter-system, - cattle-elemental-system, - cattle-epinio-system, - cattle-externalip-system, - cattle-fleet-local-system, - cattle-fleet-system, - cattle-gatekeeper-system, - cattle-global-data, - cattle-global-nt, - cattle-impersonation-system, - cattle-istio, - cattle-istio-system, - cattle-logging, - cattle-logging-system, - cattle-monitoring-system, - cattle-neuvector-system, - cattle-prometheus, - cattle-provisioning-capi-system, - cattle-resources-system, - cattle-sriov-system, - cattle-system, - cattle-ui-plugin-system, - cattle-windows-gmsa-system, - cert-manager, - cis-operator-system, - fleet-default, - ingress-nginx, - istio-system, - kube-node-lease, - kube-public, - kube-system, - longhorn-system, - rancher-alerting-drivers, - security-scan, - tigera-operator] - 
kube-controller: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - kubelet: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - generate_serving_certificate: true -addons: | - apiVersion: networking.k8s.io/v1 - kind: NetworkPolicy - metadata: - name: default-allow-all - spec: - podSelector: {} - ingress: - - {} - egress: - - {} - policyTypes: - - Ingress - - Egress - --- - apiVersion: v1 - kind: ServiceAccount - metadata: - name: default - automountServiceAccountToken: false -``` - - - - -```yaml -# 如果你打算在离线环境部署 Kubernetes, -# 请查阅文档以了解如何配置自定义的 RKE 镜像。 -nodes: [] -kubernetes_version: # 定义 RKE 版本 -services: - etcd: - uid: 52034 - gid: 52034 - kube-api: - secrets_encryption_config: - enabled: true - audit_log: - enabled: true - event_rate_limit: - enabled: true - pod_security_policy: true - kube-controller: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - kubelet: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - protect-kernel-defaults: true - generate_serving_certificate: true -addons: | - # Upstream Kubernetes restricted PSP policy - # https://github.com/kubernetes/website/blob/564baf15c102412522e9c8fc6ef2b5ff5b6e766c/content/en/examples/policy/restricted-psp.yaml - apiVersion: policy/v1beta1 - kind: PodSecurityPolicy - metadata: - name: restricted-noroot - spec: - privileged: false - # Required to prevent escalations to root. - allowPrivilegeEscalation: false - requiredDropCapabilities: - - ALL - # Allow core volume types. - volumes: - - 'configMap' - - 'emptyDir' - - 'projected' - - 'secret' - - 'downwardAPI' - # Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use. - - 'csi' - - 'persistentVolumeClaim' - - 'ephemeral' - hostNetwork: false - hostIPC: false - hostPID: false - runAsUser: - # Require the container to run without root privileges. 
- rule: 'MustRunAsNonRoot' - seLinux: - # This policy assumes the nodes are using AppArmor rather than SELinux. - rule: 'RunAsAny' - supplementalGroups: - rule: 'MustRunAs' - ranges: - # Forbid adding the root group. - - min: 1 - max: 65535 - fsGroup: - rule: 'MustRunAs' - ranges: - # Forbid adding the root group. - - min: 1 - max: 65535 - readOnlyRootFilesystem: false - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRole - metadata: - name: psp:restricted-noroot - rules: - - apiGroups: - - extensions - resourceNames: - - restricted-noroot - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: psp:restricted-noroot - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: psp:restricted-noroot - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: networking.k8s.io/v1 - kind: NetworkPolicy - metadata: - name: default-allow-all - spec: - podSelector: {} - ingress: - - {} - egress: - - {} - policyTypes: - - Ingress - - Egress - --- - apiVersion: v1 - kind: ServiceAccount - metadata: - name: default - automountServiceAccountToken: false -``` - - - - -## 加固后的 RKE 集群模板配置参考 - -参考的 RKE 集群模板提供了实现 Kubernetes 加固安装所需的最低配置。RKE 模板用于提供 Kubernetes 并定义 Rancher 设置。有关安装 RKE 及其模板详情的其他信息,请参考 Rancher [文档](../../../../getting-started/installation-and-upgrade/installation-and-upgrade.md) 。 - - - - -```yaml -# -# 集群配置 -# -default_pod_security_admission_configuration_template_name: rancher-restricted -enable_network_policy: true -local_cluster_auth_endpoint: - enabled: true -name: # 定义集群名称 - -# -# Rancher 配置 -# -rancher_kubernetes_engine_config: - addon_job_timeout: 45 - authentication: - strategy: x509|webhook - kubernetes_version: # 定义 RKE 版本 - services: - etcd: - uid: 52034 - gid: 52034 - kube-api: - audit_log: - 
enabled: true - event_rate_limit: - enabled: true - pod_security_policy: false - secrets_encryption_config: - enabled: true - kube-controller: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - kubelet: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - generate_serving_certificate: true - scheduler: - extra_args: - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -``` - - - - -```yaml -# -# 集群配置 -# -default_pod_security_policy_template_id: restricted-noroot -enable_network_policy: true -local_cluster_auth_endpoint: - enabled: true -name: # 定义集群名称 - -# -# Rancher 配置 -# -rancher_kubernetes_engine_config: - addon_job_timeout: 45 - authentication: - strategy: x509|webhook - kubernetes_version: # 定义 RKE 版本 - services: - etcd: - uid: 52034 - gid: 52034 - kube-api: - audit_log: - enabled: true - event_rate_limit: - enabled: true - pod_security_policy: true - secrets_encryption_config: - enabled: true - kube-controller: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - tls-cipher-suites: 
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - kubelet: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - protect-kernel-defaults: true - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - generate_serving_certificate: true - scheduler: - extra_args: - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -``` - - - - -## 结论 - -如果你按照本指南操作,由 Rancher 提供的 RKE 自定义集群将配置为通过 CIS Kubernetes Benchmark 测试。你可以查看我们的 RKE 自我评估指南,了解我们是如何验证每个 benchmarks 的,并且你可以在你的集群上执行相同的操作。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md deleted file mode 100644 index cb3a548a8b1..00000000000 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md +++ /dev/null @@ -1,2864 +0,0 @@ ---- -title: RKE 自我评估指南 - CIS Benchmark v1.7 - K8s 
v1.25/v1.26/v1.27 ---- - - - - - - - -本文档是 [RKE 加固指南](rke1-hardening-guide.md)的配套文档,该指南提供了关于如何加固正在生产环境中运行并由 Rancher 管理的 RKE 集群的指导方针。本 benchmark 指南可帮助你根据 CIS Kubernetes Benchmark 中的每个 control 来评估加固集群的安全性。 - -本指南对应以下版本的 Rancher、CIS Benchmarks 和 Kubernetes: - -| Rancher 版本 | CIS Benchmark 版本 | Kubernetes 版本 | -|-----------------|-----------------------|--------------------| -| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25/v1.26/v1.27 | - -本指南将介绍各种 controls,并提供更新的示例命令来审计 Rancher 创建的集群中的合规性。由于 Rancher 和 RKE 将 Kubernetes 服务安装为 Docker 容器,因此 CIS Kubernetes Benchmark 中的许多 control 验证检查不适用。这些检查将返回 `Not Applicable` 的结果。 - -本文档适用于 Rancher 运维人员、安全团队、审计员和决策者。 - -有关每个 control 的更多信息,包括详细描述和未通过测试的补救措施,请参考 CIS Kubernetes Benchmark v1.7 的相应部分。你可以在[互联网安全中心 (CIS)](https://www.cisecurity.org/benchmark/kubernetes/)创建免费账户后下载 benchmark。 - -## 测试方法 - -Rancher 和 RKE 通过 Docker 容器安装 Kubernetes 服务。配置是通过初始化时传递给容器的参数定义的,而不是通过配置文件。 - -在 control 审计与原始 CIS benchmark 不同时,提供了针对 Rancher 的特定审计命令以进行测试。在执行测试时,你将需要访问所有 RKE 节点主机上的命令行。这些命令还使用了 [kubectl](https://kubernetes.io/docs/tasks/tools/)(带有有效的配置文件)和 [jq](https://stedolan.github.io/jq/) 工具,在测试和评估测试结果时这些工具是必需的。 - -:::note - -本指南仅涵盖 `automated`(之前称为 `scored`)测试。 - -::: - -### Controls - -## 1.1 Control Plane Node Configuration Files -### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the -control plane node. -For example, chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. 
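Several of the file-permission controls below hinge on the phrase "600 or more restrictive", i.e. the mode may drop bits relative to owner read/write but may not add any. Where such a check does apply, it can be scripted; the sketch below is illustrative only (the `is_restrictive` helper is not part of kube-bench or the CIS tooling) and assumes GNU `stat`, as the audits in this guide already do:

```bash
#!/usr/bin/env bash
# Sketch: succeed when a file's mode is 600 or more restrictive,
# i.e. owner bits are a subset of rw- and group/other have no bits set.
is_restrictive() {
  local mode
  mode=$(stat -c %a "$1") || return 2
  mode=$(printf '%03d' "$mode")              # normalize e.g. "0" -> "000"
  local owner=${mode: -3:1} group=${mode: -2:1} other=${mode: -1:1}
  # Owner execute bit (odd digit) or any group/other bit fails the check.
  [ $((owner & 1)) -eq 0 ] && [ "$group" -eq 0 ] && [ "$other" -eq 0 ]
}

f=$(mktemp)
chmod 600 "$f" && is_restrictive "$f" && echo "600 ok"
chmod 644 "$f"; is_restrictive "$f" || echo "644 too permissive"
rm -f "$f"
```

Note this is stricter than a plain numeric comparison: mode `500` is numerically below `600` but adds an execute bit, so it fails.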
- -### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager. -All configuration is passed in as arguments at container run time. - -### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager. -All configuration is passed in as arguments at container run time. - -### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler. -All configuration is passed in as arguments at container run time. - -### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler. -All configuration is passed in as arguments at container run time. - -### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chmod 600 /etc/kubernetes/manifests/etcd.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd. -All configuration is passed in as arguments at container run time. - -### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root /etc/kubernetes/manifests/etcd.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd. -All configuration is passed in as arguments at container run time.
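Because an RKE cluster passes all of this configuration as container arguments, the audits later in this guide inspect the live process table (`ps -ef | grep kube-apiserver`) rather than manifest files. A small helper along these lines can pull one `--flag=value` pair out of such a captured command line for scripted checks (`get_flag` is hypothetical, not part of the benchmark tooling):

```bash
#!/usr/bin/env bash
# Sketch: extract the value of a --flag=value argument from a captured
# command line, e.g. the output of `ps -ef | grep kube-apiserver`.
get_flag() {
  local flag=$1 cmdline=$2
  # Split the command line on spaces, then print what follows "--flag=".
  printf '%s\n' "$cmdline" | tr ' ' '\n' | sed -n "s/^${flag}=//p"
}

cmdline='kube-apiserver --anonymous-auth=false --profiling=false --secure-port=6443'
get_flag --anonymous-auth "$cmdline"   # prints: false
get_flag --secure-port "$cmdline"      # prints: 6443
```

A flag that is absent simply produces no output, which makes the helper easy to combine with checks such as "`--token-auth-file` is not present".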
- -### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual) - - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chmod 600 - -**Audit:** - -```bash -ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a -find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a -``` - -**Expected Result**: - -```console -'permissions' is present -``` - -### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual) - - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root - -**Audit:** - -```bash -ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G -find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G -``` - -**Expected Result**: - -```console -'root:root' is present -``` - -### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated) - - -**Result:** pass - -**Remediation:** -On the etcd server node, get the etcd data directory, passed as an argument --data-dir, -from the command 'ps -ef | grep etcd'. -Run the below command (based on the etcd data directory found above).
For example, -chmod 700 /var/lib/etcd - -**Audit:** - -```bash -stat -c %a /node/var/lib/etcd -``` - -**Expected Result**: - -```console -'700' is equal to '700' -``` - -**Returned Value**: - -```console -700 -``` - -### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) - - -**Result:** pass - -**Remediation:** -On the etcd server node, get the etcd data directory, passed as an argument --data-dir, -from the command 'ps -ef | grep etcd'. -Run the below command (based on the etcd data directory found above). -For example, chown etcd:etcd /var/lib/etcd - -**Audit:** - -```bash -stat -c %U:%G /node/var/lib/etcd -``` - -**Expected Result**: - -```console -'etcd:etcd' is present -``` - -**Returned Value**: - -```console -etcd:etcd -``` - -### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chmod 600 /etc/kubernetes/admin.conf -Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. - -### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chown root:root /etc/kubernetes/admin.conf -Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. - -### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. 
-For example, -chmod 600 scheduler -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. - -### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root scheduler -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. - -### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chmod 600 controllermanager -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. - -### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root controllermanager -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. - -### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. 
-For example, -chown -R root:root /etc/kubernetes/pki/ - -**Audit Script:** `check_files_owner_in_dir.sh` - -```bash -#!/usr/bin/env bash - -# This script is used to ensure the owner is set to root:root for -# the given directory and all the files in it -# -# inputs: -# $1 = /full/path/to/directory -# -# outputs: -# true/false - -INPUT_DIR=$1 - -if [[ "${INPUT_DIR}" == "" ]]; then - echo "false" - exit -fi - -if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then - echo "false" - exit -fi - -statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*) -while read -r statInfoLine; do - f=$(echo ${statInfoLine} | cut -d' ' -f1) - p=$(echo ${statInfoLine} | cut -d' ' -f2) - - if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then - if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then - echo "false" - exit - fi - else - if [[ "$p" != "root:root" ]]; then - echo "false" - exit - fi - fi -done <<< "${statInfoLines}" - - -echo "true" -exit - -``` - -**Audit Execution:** - -```bash -./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Returned Value**: - -```console -true -``` - -### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual) - - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' -exec chmod -R 600 {} + - -**Audit:** - -```bash -find /node/etc/kubernetes/ssl/ -name '*.pem' ! 
-name '*key.pem' | xargs stat -c permissions=%a -``` - -**Expected Result**: - -```console -permissions has permissions 644, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=644 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 -``` - -### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -find /node/etc/kubernetes/ssl/ -name '*key.pem' -exec chmod -R 600 {} + - -**Audit:** - -```bash -find /node/etc/kubernetes/ssl/ -name '*key.pem' | xargs stat -c permissions=%a -``` - -**Expected Result**: - -```console -permissions has permissions 600, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 -``` - -## 1.2 API Server -### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the below parameter. ---anonymous-auth=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--anonymous-auth' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the documentation and configure alternate mechanisms for authentication. Then, -edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and remove the --token-auth-file= parameter. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--token-auth-file' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and remove the `DenyServiceExternalIPs` -from enabled admission plugins. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the -apiserver and kubelets. Then, edit API server pod specification file -/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the -kubelet client certificate and key parameters as below. ---kubelet-client-certificate= ---kubelet-client-key= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between
-the apiserver and kubelets. Then, edit the API server pod specification file
-/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
---kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
---kubelet-certificate-authority=
-When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers.
-
-### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
-One such example is shown below.
---authorization-mode=RBAC - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' does not have 'AlwaysAllow' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to a value that includes Node. ---authorization-mode=Node,RBAC - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' has 'Node' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to a value that includes RBAC, -for example `--authorization-mode=Node,RBAC`. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' has 'RBAC' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set the desired limits in a configuration file. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -and set the below parameters. ---enable-admission-plugins=...,EventRateLimit,... ---admission-control-config-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'EventRateLimit' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a -value that does not include AlwaysAdmit. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual) - - -**Result:** warn - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to include -AlwaysPullImages. ---enable-admission-plugins=...,AlwaysPullImages,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'AlwaysPullImages' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual) - - -**Result:** warn - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to include -SecurityContextDeny, unless PodSecurityPolicy is already in place. ---enable-admission-plugins=...,SecurityContextDeny,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the documentation and create ServiceAccount objects as per your environment. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and ensure that the --disable-admission-plugins parameter is set to a -value that does not include ServiceAccount. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --disable-admission-plugins parameter to -ensure it does not include NamespaceLifecycle. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to a -value that includes NodeRestriction. ---enable-admission-plugins=...,NodeRestriction,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'NodeRestriction' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.16 Ensure that the --secure-port argument is not set to 0 - NoteThis recommendation is obsolete and will be deleted per the consensus process (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and either remove the --secure-port parameter or -set it to a different (non-zero) desired port. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--secure-port' is greater than 0 OR '--secure-port' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.17 Ensure that the --profiling argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the below parameter. ---profiling=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--profiling' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.18 Ensure that the --audit-log-path argument is set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --audit-log-path parameter to a suitable path and -file where you would like audit logs to be written, for example, ---audit-log-path=/var/log/apiserver/audit.log - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-log-path' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --audit-log-maxage parameter to 30 -or as an appropriate number of days, for example, ---audit-log-maxage=30 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-log-maxage' is greater or equal to 30 -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate -value. For example, ---audit-log-maxbackup=10 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-log-maxbackup' is greater or equal to 10 -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB. -For example, to set it as 100 MB, --audit-log-maxsize=100 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-log-maxsize' is greater or equal to 100 -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual) - - -**Result:** warn - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -and set the below parameter as appropriate and if needed. -For example, --request-timeout=300s - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the below parameter. ---service-account-lookup=true -Alternatively, you can delete the --service-account-lookup parameter from this file so -that the default takes effect. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --service-account-key-file parameter
-to the public key file for service accounts. For example,
---service-account-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate and key file parameters.
---etcd-certfile=
---etcd-keyfile=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--etcd-certfile' is present AND '--etcd-keyfile' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the TLS certificate and private key file parameters.
---tls-cert-file=
---tls-private-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the client certificate authority file.
---client-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate authority file parameter.
---etcd-cafile=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--etcd-cafile' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure a EncryptionConfig file.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --encryption-provider-config parameter to the path of that file.
-For example, --encryption-provider-config=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--encryption-provider-config' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.30 Ensure that encryption providers are appropriately configured (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and configure a EncryptionConfig file.
-In this file, choose aescbc, kms or secretbox as the encryption provider.
-
-**Audit:**
-
-```bash
-ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%') if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
-```
-
-**Expected Result**:
-
-```console
-'provider' is present
-```
-
-### 1.2.31 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
-TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
-TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
-TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-## 1.3 Controller Manager
-### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
-for example, --terminated-pod-gc-threshold=10
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--terminated-pod-gc-threshold' is present
-```
-
-**Returned Value**:
-
-```console
-root 4184 4163 1 Sep11 ?
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true
-```
-
-### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 4184 4163 1 Sep11 ?
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true
-```
-
-### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node to set the below parameter.
---use-service-account-credentials=true
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--use-service-account-credentials' is not equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 4184 4163 1 Sep11 ?
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true
-```
-
-### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --service-account-private-key-file parameter
-to the private key file for service accounts.
---service-account-private-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4184 4163 1 Sep11 ?
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --root-ca-file parameter to the certificate bundle file. ---root-ca-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--root-ca-file' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. ---feature-gates=RotateKubeletServerCertificate=true -Clusters provisioned by RKE handle certificate rotation directly through RKE. 
- -### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and ensure the correct value for the --bind-address parameter - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--bind-address' is present OR '--bind-address' is not present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -## 1.4 Scheduler -### 1.4.1 Ensure that the --profiling argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file -on the control 
plane node and set the below parameter. ---profiling=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected Result**: - -```console -'--profiling' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4339 4318 0 Sep11 ? 00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true -``` - -### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml -on the control plane node and ensure the correct value for the --bind-address parameter - -**Audit:** - -```bash -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected Result**: - -```console -'--bind-address' is present OR '--bind-address' is not present -``` - -**Returned Value**: - -```console -root 4339 4318 0 Sep11 ? 
00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true -``` - -## 2 Etcd Node Configuration -### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the etcd service documentation and configure TLS encryption. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml -on the master node and set the below parameters. ---cert-file= ---key-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--cert-file' is present AND '--key-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and set the below parameter. ---client-cert-auth="true" - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--client-cert-auth' is equal to 'true' -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.3 Ensure that the --auto-tls argument is not set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and either remove the --auto-tls parameter or set it to false. 
- --auto-tls=false - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present -``` - -**Returned Value**: - -```console -PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ -``` - -### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the etcd service documentation and configure peer TLS encryption as appropriate -for your etcd cluster. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the -master node and set the below parameters. ---peer-client-file= ---peer-key-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--peer-cert-file' is present AND '--peer-key-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and set the below parameter. ---peer-client-cert-auth=true - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--peer-client-cert-auth' is equal to 'true' -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and either remove the --peer-auto-tls parameter or set it to false. 
---peer-auto-tls=false - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present -``` - -**Returned Value**: - -```console -PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ -``` - -### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated) - - -**Result:** pass - -**Remediation:** -[Manual test] -Follow the etcd documentation and create a dedicated certificate authority setup for the -etcd service. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the -master node and set the below parameter. ---trusted-ca-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--trusted-ca-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -## 3.1 Authentication and Authorization -### 3.1.1 Client certificate authentication should not be used for users (Manual) - - -**Result:** warn - -**Remediation:** -Alternative mechanisms provided by Kubernetes such as the use of OIDC should be -implemented in place of client certificates. - -### 3.1.2 Service account token authentication should not be used for users (Manual) - - -**Result:** warn - -**Remediation:** -Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented -in place of service account tokens. - -### 3.1.3 Bootstrap token authentication should not be used for users (Manual) - - -**Result:** warn - -**Remediation:** -Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented -in place of bootstrap tokens. - -## 3.2 Logging -### 3.2.1 Ensure that a minimal audit policy is created (Automated) - - -**Result:** pass - -**Remediation:** -Create an audit policy file for your cluster. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-policy-file' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 3.2.2 Ensure that the audit policy covers key security concerns (Manual) - - -**Result:** warn - -**Remediation:** -Review the audit policy provided for the cluster and ensure that it covers -at least the following areas, -- Access to Secrets managed by the cluster. Care should be taken to only - log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in - order to avoid risk of logging sensitive data. -- Modification of Pod and Deployment objects. -- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`. -For most requests, minimally logging at the Metadata level is recommended -(the most basic level of logging). - -## 4.1 Worker Node Configuration Files -### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service. -All configuration is passed in as arguments at container run time. 
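Although check 4.1.1 is Not Applicable on RKE-provisioned clusters, the generic audit-and-remediate step it describes can be sketched as a small shell snippet. The path below is a stand-in for a real kubelet unit file (which does not exist on RKE nodes), so this only illustrates the shape of the fix:

```shell
#!/bin/sh
# Illustrative sketch of the generic 4.1.1 remediation. /tmp/demo-kubelet.conf
# is a stand-in path; RKE nodes have no kubelet systemd unit file, so the real
# check is Not Applicable there.
f=/tmp/demo-kubelet.conf
touch "$f"
chmod 644 "$f"                      # pretend the audit found a 644 mode
chmod 600 "$f"                      # remediation suggested by the benchmark
stat -c permissions=%a "$f"         # kube-bench-style audit output: permissions=600
```

The same pattern (audit with `stat -c permissions=%a`, remediate with `chmod 600`) is what the automated checks in the rest of section 4.1 run against the real files under `/node/etc/kubernetes/ssl/`.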
- -### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, -chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service. - All configuration is passed in as arguments at container run time. - -### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, -chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' -``` - -**Expected Result**: - -```console -permissions has permissions 600, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 -``` - -### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. 
-For example, chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' -``` - -**Expected Result**: - -```console -'root:root' is present -``` - -**Returned Value**: - -```console -root:root -``` - -### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, -chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' -``` - -**Expected Result**: - -```console -permissions has permissions 600, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 -``` - -### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. 
-For example, -chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' -``` - -**Expected Result**: - -```console -'root:root' is present -``` - -**Returned Value**: - -```console -root:root -``` - -### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated) - - -**Result:** fail - -**Remediation:** -Run the following command to modify the file permissions of the --client-ca-file: -chmod 600 <filename> - -**Audit:** - -```bash -stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem -``` - -**Expected Result**: - -```console -permissions has permissions 644, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=644 -``` - -### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the following command to modify the ownership of the --client-ca-file: -chown root:root <filename> - -**Audit:** - -```bash -stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem -``` - -**Expected Result**: - -```console -'root:root' is equal to 'root:root' -``` - -**Returned Value**: - -```console -root:root -``` - -### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the following command (using the config file location identified in the Audit step) -chmod 600 /var/lib/kubelet/config.yaml -Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet. -All configuration is passed in as arguments at container run time. 
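Check 4.1.7 above is the one failure in this group: the audit shows the CA bundle at mode 644 where 600 or tighter is expected. The remediation is a single `chmod`; the sketch below applies it to a stand-in file rather than the real `/node/etc/kubernetes/ssl/kube-ca.pem`, which would require root on an actual node:

```shell
#!/bin/sh
# Sketch of the 4.1.7 fix against a stand-in file; on a real node the target
# would be /node/etc/kubernetes/ssl/kube-ca.pem.
ca=/tmp/demo-kube-ca.pem
touch "$ca"
chmod 644 "$ca"                     # the mode the audit above reported
stat -c permissions=%a "$ca"        # before: permissions=644
chmod 600 "$ca"                     # remediation from the benchmark
stat -c permissions=%a "$ca"        # after: permissions=600
```

Re-running the 4.1.7 audit after the change should report `permissions=600` and flip the result to pass.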
- -### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Run the following command (using the config file location identified in the Audit step) -chown root:root /var/lib/kubelet/config.yaml -Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet. -All configuration is passed in as arguments at container run time. - -## 4.2 Kubelet -### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to -`false`. -If using executable arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. -`--anonymous-auth=false` -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--anonymous-auth' is equal to 'false' -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If -using executable arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_AUTHZ_ARGS variable. ---authorization-mode=Webhook -Based on your system, restart the kubelet service. 
For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--authorization-mode' does not have 'AlwaysAllow' -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file 
to set `authentication.x509.clientCAFile` to -the location of the client CA file. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_AUTHZ_ARGS variable. ---client-ca-file= -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--client-ca-file' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local 
--tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `readOnlyPort` to 0. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. ---read-only-port=0 -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--read-only-port' is equal to '0' OR '--read-only-port' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a -value other than 0. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. ---streaming-connection-idle-timeout=5m -Based on your system, restart the kubelet service. 
For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated) - - -**Result:** pass 
- -**Remediation:** -If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -remove the --make-iptables-util-chains argument from the -KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 
--kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.7 Ensure that the --hostname-override argument is not set (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -on each worker node and remove the --hostname-override argument from the -KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service -Not Applicable - Clusters provisioned by RKE set the --hostname-override argument to avoid hostname configuration errors. - -### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--event-qps' is greater or equal to 0 OR '--event-qps' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `tlsCertFile` to the location -of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile` -to the location of the corresponding private key file. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameters in KUBELET_CERTIFICATE_ARGS variable. 
---tls-cert-file= ---tls-private-key-file= -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--tls-cert-file' is present AND '--tls-private-key-file' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.10 Ensure that the --rotate-certificates argument is 
not set to false (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `rotateCertificates` to `true`, or -remove it altogether to use the default value. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -remove the --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS -variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--rotate-certificates' is present OR '--rotate-certificates' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true 
--cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable. ---feature-gates=RotateKubeletServerCertificate=true -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service -Not Applicable - RKE handles certificate rotation directly for the clusters it provisions. - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `TLSCipherSuites` to -TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -or to a subset of these values. -If using executable arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the --tls-cipher-suites parameter as follows, or to a subset of these values. 
---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.13 Ensure that a limit is set on pod PIDs (Manual) - - -**Result:** warn - -**Remediation:** -Decide on an appropriate level for this parameter and set it, -either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting. - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--pod-max-pids' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -## 5.1 RBAC and Service Accounts -### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual) - - -**Result:** warn - -**Remediation:** -Identify all clusterrolebindings to the cluster-admin role. Check if they are used and -if they need this role or if they could use a role with fewer privileges. 
-Where possible, first bind users to a lower privileged role and then remove the -clusterrolebinding to the cluster-admin role : -kubectl delete clusterrolebinding [name] - -### 5.1.2 Minimize access to secrets (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove get, list and watch access to Secret objects in the cluster. - -### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual) - - -**Result:** warn - -**Remediation:** -Where possible replace any use of wildcards in clusterroles and roles with specific -objects or actions. - -### 5.1.4 Minimize access to create pods (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove create access to pod objects in the cluster. - -### 5.1.5 Ensure that default service accounts are not actively used. (Manual) - - -**Result:** pass - -**Remediation:** -Create explicit service accounts wherever a Kubernetes workload requires specific access -to the Kubernetes API server. -Modify the configuration of each default service account to include this value -automountServiceAccountToken: false - -**Audit Script:** `check_for_default_sa.sh` - -```bash -#!/bin/bash - -set -eE - -handle_error() { - echo "false" -} - -trap 'handle_error' ERR - -count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l) -if [[ ${count_sa} -gt 0 ]]; then - echo "false" - exit -fi - -for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name") -do - for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"') - do - read kind name <<<$(IFS=","; echo $result) - 
resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l) - if [[ ${resource_count} -gt 0 ]]; then - echo "false" - exit - fi - done -done - - -echo "true" - -``` - -**Audit Execution:** - -```bash -./check_for_default_sa.sh -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Returned Value**: - -```console -true -``` - -### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) - - -**Result:** warn - -**Remediation:** -Modify the definition of pods and service accounts which do not need to mount service -account tokens to disable it. - -### 5.1.7 Avoid use of system:masters group (Manual) - - -**Result:** warn - -**Remediation:** -Remove the system:masters group from all users in the cluster. - -### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove the impersonate, bind and escalate rights from subjects. - -### 5.1.9 Minimize access to create persistent volumes (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove create access to PersistentVolume objects in the cluster. - -### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the proxy sub-resource of node objects. - -### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the approval sub-resource of certificatesigningrequest objects. 
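Most of the manual checks in section 5.1 reduce to searching RBAC objects for a sensitive verb/resource pair. The sketch below shows one way to do that for the approval sub-resource of control 5.1.11 using the `jq` tool this guide already requires; the JSON sample and role names are hypothetical stand-ins for live `kubectl get clusterroles -o json` output.

```shell
# Hypothetical sample standing in for `kubectl get clusterroles -o json`;
# on a live cluster, run jq against the real kubectl output instead.
cat <<'EOF' > /tmp/clusterroles.json
{"items":[
  {"metadata":{"name":"csr-approver"},
   "rules":[{"apiGroups":["certificates.k8s.io"],
             "resources":["certificatesigningrequests/approval"],
             "verbs":["update"]}]},
  {"metadata":{"name":"pod-viewer"},
   "rules":[{"apiGroups":[""],"resources":["pods"],"verbs":["get","list"]}]}
]}
EOF

# Print the name of every role whose rules touch the approval sub-resource.
jq -r '.items[]
       | select(.rules[]?.resources[]? == "certificatesigningrequests/approval")
       | .metadata.name' /tmp/clusterroles.json
# → csr-approver
```

The same pattern, with a different resource string, covers the webhook-configuration and token-creation checks that follow.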
- -### 5.1.12 Minimize access to webhook configuration objects (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects - -### 5.1.13 Minimize access to the service account token creation (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the token sub-resource of serviceaccount objects. - -## 5.2 Pod Security Standards -### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual) - - -**Result:** warn - -**Remediation:** -Ensure that either Pod Security Admission or an external policy control system is in place -for every namespace which contains user workloads. - -### 5.2.2 Minimize the admission of privileged containers (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of privileged containers. - -### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostPID` containers. - -### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostIPC` containers. - -### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostNetwork` containers. 
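On clusters using the built-in Pod Security Admission controller, the namespace-level policies called for in controls 5.2.2 through 5.2.5 are typically expressed as labels on each user-workload namespace. A minimal sketch, assuming a hypothetical `my-app` namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app            # illustrative namespace name
  labels:
    # Reject pods that request privileged mode, hostPID, hostIPC, or hostNetwork.
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: latest
    # Surface violations of the stricter profile without blocking admission.
    pod-security.kubernetes.io/warn: restricted
```

The `baseline` profile blocks the host-namespace sharing these controls target, while the `warn` label flags workloads that would fail the `restricted` profile.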
- -### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers with `.spec.allowPrivilegeEscalation` set to `true`. - -### 5.2.7 Minimize the admission of root containers (Manual) - - -**Result:** warn - -**Remediation:** -Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot` -or `MustRunAs` with the range of UIDs not including 0, is set. - -### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers with the `NET_RAW` capability. - -### 5.2.9 Minimize the admission of containers with added capabilities (Manual) - - -**Result:** warn - -**Remediation:** -Ensure that `allowedCapabilities` is not present in policies for the cluster unless -it is set to an empty array. - -### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual) - - -**Result:** warn - -**Remediation:** -Review the use of capabilities in applications running on your cluster. Where a namespace -contains applications which do not require any Linux capabilities to operate, consider adding -a PSP which forbids the admission of containers which do not drop all capabilities. - -### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`. 
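The capability-related controls above (5.2.8 through 5.2.10) are ultimately satisfied by workloads that drop capabilities in their `securityContext`. A hedged sketch of a compliant container spec; the pod name and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: minimal-caps                      # illustrative name
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # illustrative image
    securityContext:
      allowPrivilegeEscalation: false     # also addresses control 5.2.6
      capabilities:
        drop: ["ALL"]                     # no Linux capabilities, including NET_RAW
```

Admission policies can then simply reject containers whose spec does not look like this.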
- -### 5.2.12 Minimize the admission of HostPath volumes (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers with `hostPath` volumes. - -### 5.2.13 Minimize the admission of containers which use HostPorts (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers which use `hostPort` sections. - -## 5.3 Network Policies and CNI -### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual) - - -**Result:** warn - -**Remediation:** -If the CNI plugin in use does not support network policies, consideration should be given to -making use of a different plugin, or finding an alternate mechanism for restricting traffic -in the Kubernetes cluster. - -### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual) - - -**Result:** warn - -**Remediation:** -Follow the documentation and create NetworkPolicy objects as you need them. - -## 5.4 Secrets Management -### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual) - - -**Result:** warn - -**Remediation:** -If possible, rewrite application code to read Secrets from mounted secret files, rather than -from environment variables. - -### 5.4.2 Consider external secret storage (Manual) - - -**Result:** warn - -**Remediation:** -Refer to the Secrets management options offered by your cloud provider or a third-party -secrets management solution. - -## 5.5 Extensible Admission Control -### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual) - - -**Result:** warn - -**Remediation:** -Follow the Kubernetes documentation and set up image provenance. 
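For 5.5.1, image provenance is wired up by passing an admission configuration file to the API server (via its `--admission-control-config-file` flag). A minimal sketch, assuming an illustrative kubeconfig path for the external image-review service:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      # kubeconfig pointing at the image-review webhook (illustrative path)
      kubeConfigFile: /etc/kubernetes/image-policy/kubeconfig.yaml
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      # Fail closed: reject images when the webhook is unreachable.
      defaultAllow: false
```

Setting `defaultAllow: false` is the security-relevant choice here; the TTL and backoff values are tuning knobs.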
- -## 5.7 General Policies -### 5.7.1 Create administrative boundaries between resources using namespaces (Manual) - - -**Result:** warn - -**Remediation:** -Follow the documentation and create namespaces for objects in your deployment as you need -them. - -### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual) - - -**Result:** warn - -**Remediation:** -Use `securityContext` to enable the docker/default seccomp profile in your pod definitions. -An example is shown below: - securityContext: - seccompProfile: - type: RuntimeDefault - -### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual) - - -**Result:** warn - -**Remediation:** -Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a -suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker -Containers. - -### 5.7.4 The default namespace should not be used (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Ensure that namespaces are created to allow for appropriate segregation of Kubernetes -resources and that all new resources are created in a specific namespace. - diff --git a/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md b/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md deleted file mode 100644 index afa5dc0fef1..00000000000 --- a/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md +++ /dev/null @@ -1,513 +0,0 @@ ---- -title: RKE Hardening Guides ---- - - - - - - - -This document provides prescriptive guidance for how to harden an RKE cluster intended for production, before provisioning it with Rancher. It outlines the configurations and controls required to address Center for Internet Security (CIS) Kubernetes benchmark controls. 
- -:::note -This hardening guide describes how to secure the nodes in your cluster. We recommend that you follow this guide before you install Kubernetes. -::: - -This hardening guide is intended to be used for RKE clusters and is associated with the following versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher: - -| Rancher Version | CIS Benchmark Version | Kubernetes Version | -|-----------------|-----------------------|------------------------------| -| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.23 | -| Rancher v2.7 | Benchmark v1.24 | Kubernetes v1.24 | -| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25 up to v1.26 | - -:::note -- In Benchmark v1.24 and later, check id `4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)` might fail, as `/etc/kubernetes/ssl/kube-ca.pem` is set to 644 by default. -- In Benchmark v1.7, the `--protect-kernel-defaults` (`4.2.6`) parameter is no longer required, and was removed by CIS. -::: - -For more details on how to evaluate a hardened RKE cluster against the official CIS benchmark, refer to the RKE self-assessment guides for specific Kubernetes and CIS benchmark versions. - -## Host-level requirements - -### Configure Kernel Runtime Parameters - -The following `sysctl` configuration is recommended for all node types in the cluster. Set the following parameters in `/etc/sysctl.d/90-kubelet.conf`: - -```ini -vm.overcommit_memory=1 -vm.panic_on_oom=0 -kernel.panic=10 -kernel.panic_on_oops=1 -``` - -Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings. - -### Configure `etcd` user and group - -A user account and group for the **etcd** service must be set up before installing RKE. - -#### Create `etcd` user and group - -To create the **etcd** user and group, run the following console commands. -The commands below use `52034` for **uid** and **gid** for example purposes. 
-Any valid unused **uid** or **gid** could also be used in lieu of `52034`. - -```bash -groupadd --gid 52034 etcd -useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd --shell /usr/sbin/nologin -``` - -When deploying RKE through its cluster configuration `config.yml` file, update the `uid` and `gid` of the `etcd` user: - -```yaml -services: - etcd: - gid: 52034 - uid: 52034 -``` - -## Kubernetes runtime requirements - -### Configure `default` Service Account - -#### Set `automountServiceAccountToken` to `false` for `default` service accounts - -Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. -Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. -The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments. - -For each namespace including `default` and `kube-system` on a standard RKE install, the `default` service account must include this value: - -```yaml -automountServiceAccountToken: false -``` - -Save the following configuration to a file called `account_update.yaml`. - -```yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: default -automountServiceAccountToken: false -``` - -Create a bash script file called `account_update.sh`. -Be sure to `chmod +x account_update.sh` so the script has execute permissions. - -```bash -#!/bin/bash -e - -for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do - kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)" -done -``` - -Execute this script to apply the `account_update.yaml` configuration to `default` service account in all namespaces. 
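To spot-check the result, count the `default` service accounts that still automount a token; a hardened cluster should report zero. The heredoc sample below is a hypothetical stand-in for live `kubectl get serviceaccounts -A -o json` output taken after the patch has been applied:

```shell
# Hypothetical post-patch sample; on a live cluster, run jq against
# `kubectl get serviceaccounts -A -o json` instead.
cat <<'EOF' > /tmp/serviceaccounts.json
{"items":[
  {"metadata":{"name":"default","namespace":"default"},
   "automountServiceAccountToken":false},
  {"metadata":{"name":"default","namespace":"kube-system"},
   "automountServiceAccountToken":false}
]}
EOF

# Count `default` service accounts that still automount a token
# (field missing or true); a hardened cluster should report 0.
jq '[.items[]
     | select(.metadata.name == "default")
     | select(.automountServiceAccountToken != false)]
    | length' /tmp/serviceaccounts.json
# → 0
```

This is the same condition the `check_for_default_sa.sh` audit script in the companion self-assessment guide evaluates.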
- -### Configure Network Policy - -#### Ensure that all Namespaces have Network Policies defined - -Running different applications on the same Kubernetes cluster creates a risk of one compromised application attacking a neighboring application. Network segmentation is important to ensure that containers can communicate only with those they are supposed to. A network policy is a specification of how selections of pods are allowed to communicate with each other and other network endpoints. - -Network Policies are namespace-scoped. When a network policy is introduced to a given namespace, all traffic not allowed by the policy is denied. However, if there are no network policies in a namespace, all traffic will be allowed into and out of the pods in that namespace. To enforce network policies, a container network interface (CNI) plugin must be enabled. This guide uses [Canal](https://github.com/projectcalico/canal) to provide the policy enforcement. Additional information about CNI providers can be found [here](https://www.suse.com/c/rancher_blog/comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/). - -Once a CNI provider is enabled on a cluster, a default network policy can be applied. For reference purposes, a **permissive** example is provided below. If you want to allow all traffic to all pods in a namespace (even if policies are added that cause some pods to be treated as “isolated”), you can create a policy that explicitly allows all traffic in that namespace. Save the following configuration as `default-allow-all.yaml`. Additional [documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/) about network policies can be found on the Kubernetes site. - -:::caution -This network policy is just an example and is not recommended for production use. 
-::: - -```yaml ---- -apiVersion: networking.k8s.io/v1 -kind: NetworkPolicy -metadata: - name: default-allow-all -spec: - podSelector: {} - ingress: - - {} - egress: - - {} - policyTypes: - - Ingress - - Egress -``` - -Create a bash script file called `apply_networkPolicy_to_all_ns.sh`. Be sure to `chmod +x apply_networkPolicy_to_all_ns.sh` so the script has execute permissions. - -```bash -#!/bin/bash -e - -for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do - kubectl apply -f default-allow-all.yaml -n ${namespace} -done -``` - -Execute this script to apply the `default-allow-all.yaml` configuration with the **permissive** `NetworkPolicy` to all namespaces. - -## Known Limitations - -- Rancher **exec shell** and **view logs** for pods are **not** functional in a hardened setup when only a public IP is provided during registration of custom nodes. This functionality requires a private IP to be provided when registering the custom nodes. - -## Reference Hardened RKE `cluster.yml` Configuration - -The reference `cluster.yml` is used by the RKE CLI and provides the configuration needed to achieve a hardened installation of RKE. RKE [documentation](https://rancher.com/docs/rke/latest/en/installation/) provides additional details about the configuration items. This reference `cluster.yml` does not include the required `nodes` directive, which will vary depending on your environment. Documentation for node configuration in RKE can be found [here](https://rancher.com/docs/rke/latest/en/config-options/nodes/). - -The example `cluster.yml` configuration file contains an Admission Configuration policy in the `services.kube-api.admission_configuration` field. 
This [sample](../../psa-restricted-exemptions.md) policy contains the namespace exemptions necessary for an imported RKE cluster to run properly in Rancher, similar to Rancher's pre-defined [`rancher-restricted`](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) policy. - -If you prefer to use RKE's default `restricted` policy, then leave the `services.kube-api.admission_configuration` field empty and set `services.pod_security_configuration` to `restricted`. See [the RKE docs](https://rke.docs.rancher.com/config-options/services/pod-security-admission) for more information. - - - - -:::note -If you intend to import an RKE cluster into Rancher, please consult the [documentation](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) for how to configure the PSA to exempt Rancher system namespaces. -::: - -```yaml -# If you intend to deploy Kubernetes in an air-gapped environment, -# please consult the documentation on how to configure custom RKE images. -nodes: [] -kubernetes_version: # Define RKE version -services: - etcd: - uid: 52034 - gid: 52034 - kube-api: - secrets_encryption_config: - enabled: true - audit_log: - enabled: true - event_rate_limit: - enabled: true - # Leave `pod_security_configuration` out if you are setting a - # custom policy in `admission_configuration`. Otherwise set - # it to `restricted` to use RKE's pre-defined restricted policy, - # and remove everything inside `admission_configuration` field. 
- # - # pod_security_configuration: restricted - # - admission_configuration: - apiVersion: apiserver.config.k8s.io/v1 - kind: AdmissionConfiguration - plugins: - - name: PodSecurity - configuration: - apiVersion: pod-security.admission.config.k8s.io/v1 - kind: PodSecurityConfiguration - defaults: - enforce: "restricted" - enforce-version: "latest" - audit: "restricted" - audit-version: "latest" - warn: "restricted" - warn-version: "latest" - exemptions: - usernames: [] - runtimeClasses: [] - namespaces: [calico-apiserver, - calico-system, - cattle-alerting, - cattle-csp-adapter-system, - cattle-elemental-system, - cattle-epinio-system, - cattle-externalip-system, - cattle-fleet-local-system, - cattle-fleet-system, - cattle-gatekeeper-system, - cattle-global-data, - cattle-global-nt, - cattle-impersonation-system, - cattle-istio, - cattle-istio-system, - cattle-logging, - cattle-logging-system, - cattle-monitoring-system, - cattle-neuvector-system, - cattle-prometheus, - cattle-provisioning-capi-system, - cattle-resources-system, - cattle-sriov-system, - cattle-system, - cattle-ui-plugin-system, - cattle-windows-gmsa-system, - cert-manager, - cis-operator-system, - fleet-default, - ingress-nginx, - istio-system, - kube-node-lease, - kube-public, - kube-system, - longhorn-system, - rancher-alerting-drivers, - security-scan, - tigera-operator] - kube-controller: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - kubelet: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - generate_serving_certificate: true -addons: | - apiVersion: networking.k8s.io/v1 - kind: NetworkPolicy - metadata: - name: default-allow-all - spec: - podSelector: {} - ingress: - - {} - egress: - - {} - policyTypes: - - Ingress - - Egress - --- - apiVersion: v1 - kind: ServiceAccount - metadata: - name: default - automountServiceAccountToken: false -``` - - - - -```yaml -# If you intend to deploy Kubernetes in an air-gapped environment, -# please consult the 
documentation on how to configure custom RKE images. -nodes: [] -kubernetes_version: # Define RKE version -services: - etcd: - uid: 52034 - gid: 52034 - kube-api: - secrets_encryption_config: - enabled: true - audit_log: - enabled: true - event_rate_limit: - enabled: true - pod_security_policy: true - kube-controller: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - kubelet: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - protect-kernel-defaults: true - generate_serving_certificate: true -addons: | - # Upstream Kubernetes restricted PSP policy - # https://github.com/kubernetes/website/blob/564baf15c102412522e9c8fc6ef2b5ff5b6e766c/content/en/examples/policy/restricted-psp.yaml - apiVersion: policy/v1beta1 - kind: PodSecurityPolicy - metadata: - name: restricted-noroot - spec: - privileged: false - # Required to prevent escalations to root. - allowPrivilegeEscalation: false - requiredDropCapabilities: - - ALL - # Allow core volume types. - volumes: - - 'configMap' - - 'emptyDir' - - 'projected' - - 'secret' - - 'downwardAPI' - # Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use. - - 'csi' - - 'persistentVolumeClaim' - - 'ephemeral' - hostNetwork: false - hostIPC: false - hostPID: false - runAsUser: - # Require the container to run without root privileges. - rule: 'MustRunAsNonRoot' - seLinux: - # This policy assumes the nodes are using AppArmor rather than SELinux. - rule: 'RunAsAny' - supplementalGroups: - rule: 'MustRunAs' - ranges: - # Forbid adding the root group. - - min: 1 - max: 65535 - fsGroup: - rule: 'MustRunAs' - ranges: - # Forbid adding the root group. 
- - min: 1 - max: 65535 - readOnlyRootFilesystem: false - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRole - metadata: - name: psp:restricted-noroot - rules: - - apiGroups: - - extensions - resourceNames: - - restricted-noroot - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: psp:restricted-noroot - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: psp:restricted-noroot - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: networking.k8s.io/v1 - kind: NetworkPolicy - metadata: - name: default-allow-all - spec: - podSelector: {} - ingress: - - {} - egress: - - {} - policyTypes: - - Ingress - - Egress - --- - apiVersion: v1 - kind: ServiceAccount - metadata: - name: default - automountServiceAccountToken: false -``` - - - - -## Reference Hardened RKE Cluster Template Configuration - -The reference RKE cluster template provides the minimum required configuration to achieve a hardened installation of Kubernetes. RKE templates are used to provision Kubernetes and define Rancher settings. Follow the Rancher [documentation](../../../../getting-started/installation-and-upgrade/installation-and-upgrade.md) for additional information about installing RKE and its template details. 
- - - - -```yaml -# -# Cluster Config -# -default_pod_security_admission_configuration_template_name: rancher-restricted -enable_network_policy: true -local_cluster_auth_endpoint: - enabled: true -name: # Define cluster name - -# -# Rancher Config -# -rancher_kubernetes_engine_config: - addon_job_timeout: 45 - authentication: - strategy: x509|webhook - kubernetes_version: # Define RKE version - services: - etcd: - uid: 52034 - gid: 52034 - kube-api: - audit_log: - enabled: true - event_rate_limit: - enabled: true - pod_security_policy: false - secrets_encryption_config: - enabled: true - kube-controller: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - kubelet: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - generate_serving_certificate: true - scheduler: - extra_args: - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -``` - - - - -```yaml -# -# Cluster Config -# -default_pod_security_policy_template_id: restricted-noroot -enable_network_policy: true -local_cluster_auth_endpoint: - enabled: true -name: # Define cluster name - -# -# Rancher 
Config -# -rancher_kubernetes_engine_config: - addon_job_timeout: 45 - authentication: - strategy: x509|webhook - kubernetes_version: # Define RKE version - services: - etcd: - uid: 52034 - gid: 52034 - kube-api: - audit_log: - enabled: true - event_rate_limit: - enabled: true - pod_security_policy: true - secrets_encryption_config: - enabled: true - kube-controller: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - kubelet: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - protect-kernel-defaults: true - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - generate_serving_certificate: true - scheduler: - extra_args: - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -``` - - - - -## Conclusion - -If you have followed this guide, your RKE custom cluster provisioned by Rancher will be configured to pass the CIS Kubernetes Benchmark. You can review our RKE self-assessment guides to understand how we verified each of the benchmarks and how you can do the same on your cluster. 
diff --git a/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md b/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md
deleted file mode 100644
index ac002a20369..00000000000
--- a/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md
+++ /dev/null
@@ -1,2865 +0,0 @@
----
-title: RKE Self-Assessment Guide - CIS Benchmark v1.7 - K8s v1.25/v1.26/v1.27
----
-
-
-
-
-
-
-
-This document is a companion to the [RKE Hardening Guide](rke1-hardening-guide.md), which provides prescriptive guidance on how to harden RKE clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
-
-This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
-
-| Rancher Version | CIS Benchmark Version | Kubernetes Version |
-|-----------------|-----------------------|--------------------|
-| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25/v1.26/v1.27 |
-
-This guide walks through the various controls and provides updated example commands to audit compliance in Rancher-created clusters. Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. These checks will return a result of `Not Applicable`.
-
-This document is for Rancher operators, security teams, auditors, and decision makers.
-
-For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.7. You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
-
-## Testing Methodology
-
-Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files.
-
-Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the command line on the hosts of all RKE nodes. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required in the testing and evaluation of test results.
-
-:::note
-
-This guide only covers `automated` (previously called `scored`) tests.
-
-:::
-
-### Controls
-
-## 1.1 Control Plane Node Configuration Files
-### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the
-control plane node.
-For example, chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 600 /etc/kubernetes/manifests/etcd.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root /etc/kubernetes/manifests/etcd.yaml
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 600 
-
-**Audit:**
-
-```bash
-ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
-find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
-```
-
-**Expected Result**:
-
-```console
-'permissions' is present
-```
-
-### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root 
-
-**Audit:**
-
-```bash
-ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
-find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above). For example,
-chmod 700 /var/lib/etcd
-
-**Audit:**
-
-```bash
-stat -c %a /node/var/lib/etcd
-```
-
-**Expected Result**:
-
-```console
-'700' is equal to '700'
-```
-
-**Returned Value**:
-
-```console
-700
-```
-
-### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above).
-For example, chown etcd:etcd /var/lib/etcd
-
-**Audit:**
-
-```bash
-stat -c %U:%G /node/var/lib/etcd
-```
-
-**Expected Result**:
-
-```console
-'etcd:etcd' is present
-```
-
-**Returned Value**:
-
-```console
-etcd:etcd
-```
-
-### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 600 /etc/kubernetes/admin.conf
-Not Applicable - Cluster provisioned by RKE does not store the Kubernetes default kubeconfig credentials file on the nodes.
-
-### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/admin.conf
-Not Applicable - Cluster provisioned by RKE does not store the Kubernetes default kubeconfig credentials file on the nodes.
-
-### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 600 scheduler
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root scheduler
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler.
-All configuration is passed in as arguments at container run time.
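The file-permission and ownership audits in this section all follow the same pattern: `stat(1)` with a format string, compared against an expected value. This is a small self-contained demonstration of that pattern, not from the original guide: it uses a scratch file instead of the real `/node/var/lib/etcd` or `/etc/kubernetes` paths, and it assumes GNU coreutils `stat` (the BSD/macOS `stat` takes different flags).

```shell
# Demonstrate the stat-based audit pattern on a scratch file.
f=$(mktemp)
chmod 600 "$f"
stat -c 'permissions=%a' "$f"   # prints permissions=600 (GNU stat)
stat -c '%U:%G' "$f"            # owner:group; value depends on the invoking user
rm -f "$f"
```

The real audits differ only in the path checked and the expected value (`700` for the etcd data directory, `etcd:etcd` or `root:root` for ownership).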
-
-### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 600 controllermanager
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root controllermanager
-Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown -R root:root /etc/kubernetes/pki/
-
-**Audit Script:** `check_files_owner_in_dir.sh`
-
-```bash
-#!/usr/bin/env bash
-
-# This script is used to ensure the owner is set to root:root for
-# the given directory and all the files in it
-#
-# inputs:
-#   $1 = /full/path/to/directory
-#
-# outputs:
-#   true/false
-
-INPUT_DIR=$1
-
-if [[ "${INPUT_DIR}" == "" ]]; then
-  echo "false"
-  exit
-fi
-
-if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then
-  echo "false"
-  exit
-fi
-
-statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*)
-while read -r statInfoLine; do
-  f=$(echo ${statInfoLine} | cut -d' ' -f1)
-  p=$(echo ${statInfoLine} | cut -d' ' -f2)
-
-  if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then
-    if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then
-      echo "false"
-      exit
-    fi
-  else
-    if [[ "$p" != "root:root" ]]; then
-      echo "false"
-      exit
-    fi
-  fi
-done <<< "${statInfoLines}"
-
-echo "true"
-exit
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-true
-```
-
-### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' -exec chmod -R 600 {} +
-
-**Audit:**
-
-```bash
-find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' | xargs stat -c permissions=%a
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 600 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=644 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600
-```
-
-### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-find /node/etc/kubernetes/ssl/ -name '*key.pem' -exec chmod -R 600 {} +
-
-**Audit:**
-
-```bash
-find /node/etc/kubernetes/ssl/ -name '*key.pem' | xargs stat -c permissions=%a
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 600 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600
-```
-
-## 1.2 API Server
-### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---anonymous-auth=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--anonymous-auth' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the documentation and configure alternate mechanisms for authentication. Then, -edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and remove the --token-auth-file= parameter. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--token-auth-file' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and remove the `DenyServiceExternalIPs` -from enabled admission plugins. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the -apiserver and kubelets. Then, edit API server pod specification file -/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the -kubelet client certificate and key parameters as below. ---kubelet-client-certificate= ---kubelet-client-key= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Follow the Kubernetes documentation and setup the TLS connection between -the apiserver and kubelets. Then, edit the API server pod specification file -/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the ---kubelet-certificate-authority parameter to the path to the cert file for the certificate authority. ---kubelet-certificate-authority= -When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. - -### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow. -One such example could be as below. 
---authorization-mode=RBAC - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' does not have 'AlwaysAllow' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to a value that includes Node. ---authorization-mode=Node,RBAC - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' has 'Node' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to a value that includes RBAC, -for example `--authorization-mode=Node,RBAC`. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' has 'RBAC' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set the desired limits in a configuration file. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -and set the below parameters. ---enable-admission-plugins=...,EventRateLimit,... ---admission-control-config-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'EventRateLimit' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a -value that does not include AlwaysAdmit. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual) - - -**Result:** warn - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to include -AlwaysPullImages. ---enable-admission-plugins=...,AlwaysPullImages,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'AlwaysPullImages' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual) - - -**Result:** warn - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to include -SecurityContextDeny, unless PodSecurityPolicy is already in place. ---enable-admission-plugins=...,SecurityContextDeny,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the documentation and create ServiceAccount objects as per your environment. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and ensure that the --disable-admission-plugins parameter is set to a -value that does not include ServiceAccount. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --disable-admission-plugins parameter to -ensure it does not include NamespaceLifecycle. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to a -value that includes NodeRestriction. ---enable-admission-plugins=...,NodeRestriction,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'NodeRestriction' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.16 Ensure that the --secure-port argument is not set to 0 - Note: This recommendation is obsolete and will be deleted per the consensus process (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and either remove the --secure-port parameter or -set it to a different (non-zero) desired port. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--secure-port' is greater than 0 OR '--secure-port' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.17 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.18 Ensure that the --audit-log-path argument is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-path parameter to a suitable path and
-file where you would like audit logs to be written, for example,
---audit-log-path=/var/log/apiserver/audit.log
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-path' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxage parameter to 30
-or as an appropriate number of days, for example,
---audit-log-maxage=30
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxage' is greater or equal to 30
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
-value. For example,
---audit-log-maxbackup=10
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxbackup' is greater or equal to 10
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
-For example, to set it as 100 MB, --audit-log-maxsize=100
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxsize' is greater or equal to 100
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-and set the below parameter as appropriate and if needed.
-For example, --request-timeout=300s
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---service-account-lookup=true
-Alternatively, you can delete the --service-account-lookup parameter from this file so
-that the default takes effect.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --service-account-key-file parameter
-to the public key file for service accounts. For example,
---service-account-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate and key file parameters.
---etcd-certfile=
---etcd-keyfile=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--etcd-certfile' is present AND '--etcd-keyfile' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the TLS certificate and private key file parameters.
---tls-cert-file=
---tls-private-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the client certificate authority file.
---client-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the etcd certificate authority file parameter. ---etcd-cafile= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--etcd-cafile' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and configure an EncryptionConfig file. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --encryption-provider-config parameter to the path of that file. -For example, --encryption-provider-config= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--encryption-provider-config' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.30 Ensure that encryption providers are appropriately configured (Manual) - - -**Result:** warn - -**Remediation:** -Follow the Kubernetes documentation and configure an EncryptionConfig file. -In this file, choose aescbc, kms or secretbox as the encryption provider. - -**Audit:** - -```bash -ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%'); if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi -``` - -**Expected Result**: - -```console -'provider' is present -``` - -### 1.2.31 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the below parameter. 
---tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256, -TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, -TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, -TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, -TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, -TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, -TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA, -TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -## 1.3 Controller Manager -### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold, -for example, --terminated-pod-gc-threshold=10 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--terminated-pod-gc-threshold' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.2 Ensure that the --profiling argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the below parameter. ---profiling=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--profiling' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node to set the below parameter. ---use-service-account-credentials=true - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--use-service-account-credentials' is not equal to 'false' -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --service-account-private-key-file parameter -to the private key file for service accounts. ---service-account-private-key-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--service-account-private-key-file' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --root-ca-file parameter to the certificate bundle file. ---root-ca-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--root-ca-file' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. ---feature-gates=RotateKubeletServerCertificate=true -Clusters provisioned by RKE handle certificate rotation directly through RKE. 
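Although 1.3.6 is Not Applicable on RKE, the flag test it describes follows the same pattern as every other audit in this benchmark: capture the process command line, then grep for the expected flag value. A minimal sketch of that pattern, run against a hard-coded sample command line (an assumption for illustration) rather than live `ps` output:

```bash
# Sample controller manager command line; in a real audit this would come from:
#   ps -ef | grep kube-controller-manager | grep -v grep
cmdline='kube-controller-manager --feature-gates=RotateKubeletServerCertificate=true --profiling=false'

# Pass if the feature gate appears with the expected value.
if printf '%s\n' "$cmdline" | grep -q 'RotateKubeletServerCertificate=true'; then
  status=pass
else
  status=fail
fi
echo "RotateKubeletServerCertificate check: $status"
```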
- -### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and ensure the correct value for the --bind-address parameter - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--bind-address' is present OR '--bind-address' is not present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -## 1.4 Scheduler -### 1.4.1 Ensure that the --profiling argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file -on the control 
plane node and set the below parameter. ---profiling=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected Result**: - -```console -'--profiling' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4339 4318 0 Sep11 ? 00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true -``` - -### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml -on the control plane node and ensure the correct value for the --bind-address parameter - -**Audit:** - -```bash -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected Result**: - -```console -'--bind-address' is present OR '--bind-address' is not present -``` - -**Returned Value**: - -```console -root 4339 4318 0 Sep11 ? 
00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true -``` - -## 2 Etcd Node Configuration -### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the etcd service documentation and configure TLS encryption. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml -on the master node and set the below parameters. ---cert-file= ---key-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--cert-file' is present AND '--key-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
-
-### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and set the below parameter.
---client-cert-auth="true"
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--client-cert-auth' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
-
-### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and either remove the --auto-tls parameter or set it to false.
---auto-tls=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/
-```
-
-### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the etcd service documentation and configure peer TLS encryption as appropriate
-for your etcd cluster.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the
-master node and set the below parameters.
---peer-client-file=
---peer-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--peer-cert-file' is present AND '--peer-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
-
-### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and set the below parameter.
---peer-client-cert-auth=true
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--peer-client-cert-auth' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
-
-### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and either remove the --peer-auto-tls parameter or set it to false.
---peer-auto-tls=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/
-```
-
-### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-[Manual test]
-Follow the etcd documentation and create a dedicated certificate authority setup for the
-etcd service.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the
-master node and set the below parameter.
---trusted-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--trusted-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
-
-## 3.1 Authentication and Authorization
-### 3.1.1 Client certificate authentication should not be used for users (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
-implemented in place of client certificates.
-
-### 3.1.2 Service account token authentication should not be used for users (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
-in place of service account tokens.
-
-### 3.1.3 Bootstrap token authentication should not be used for users (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
-in place of bootstrap tokens.
-
-## 3.2 Logging
-### 3.2.1 Ensure that a minimal audit policy is created (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Create an audit policy file for your cluster.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-policy-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the audit policy provided for the cluster and ensure that it covers
-at least the following areas,
-- Access to Secrets managed by the cluster. Care should be taken to only
-  log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
-  order to avoid risk of logging sensitive data.
-- Modification of Pod and Deployment objects.
-- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
-For most requests, minimally logging at the Metadata level is recommended
-(the most basic level of logging).
-
-## 4.1 Worker Node Configuration Files
-### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the each worker node.
-For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-Not Applicable - Clusters provisioned by RKE doesn’t require or maintain a configuration file for the kubelet service.
-All configuration is passed in as arguments at container run time.
-
-### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the each worker node.
-For example,
-chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-Not Applicable - Clusters provisioned by RKE doesn’t require or maintain a configuration file for the kubelet service.
- All configuration is passed in as arguments at container run time.
-
-### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the each worker node.
-For example,
-chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 600 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=600
-```
-
-### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the each worker node.
-For example, chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the each worker node.
-For example,
-chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 600 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=600
-```
-
-### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the each worker node.
-For example,
-chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Run the following command to modify the file permissions of the
---client-ca-file chmod 600
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 600 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the ownership of the --client-ca-file.
-chown root:root
-
-**Audit:**
-
-```bash
-stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chmod 600 /var/lib/kubelet/config.yaml
-Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet.
-All configuration is passed in as arguments at container run time.
-
-### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chown root:root /var/lib/kubelet/config.yaml
-Not Applicable - Clusters provisioned by RKE doesn’t require or maintain a configuration file for the kubelet.
-All configuration is passed in as arguments at container run time.
-
-## 4.2 Kubelet
-### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
-`false`.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
-`--anonymous-auth=false`
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--anonymous-auth' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
-using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---authorization-mode=Webhook
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
-the location of the client CA file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---client-ca-file=
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---read-only-port=0
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--read-only-port' is equal to '0' OR '--read-only-port' is not present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ?
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a -value other than 0. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. ---streaming-connection-idle-timeout=5m -Based on your system, restart the kubelet service. 
For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated) - - -**Result:** pass 
- -**Remediation:** -If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -remove the --make-iptables-util-chains argument from the -KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 
--kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.7 Ensure that the --hostname-override argument is not set (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -on each worker node and remove the --hostname-override argument from the -KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service -Not Applicable - Clusters provisioned by RKE set the --hostname-override to avoid any hostname configuration errors - -### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--event-qps' is greater or equal to 0 OR '--event-qps' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `tlsCertFile` to the location -of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile` -to the location of the corresponding private key file. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameters in KUBELET_CERTIFICATE_ARGS variable. 
---tls-cert-file= ---tls-private-key-file= -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--tls-cert-file' is present AND '--tls-private-key-file' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.10 Ensure that the --rotate-certificates argument is 
not set to false (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to add the line `rotateCertificates` to `true` or -remove it altogether to use the default value. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS -variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--rotate-certificates' is present OR '--rotate-certificates' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true 
--cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
---feature-gates=RotateKubeletServerCertificate=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-Not Applicable - Clusters provisioned by RKE handle certificate rotation directly through RKE.
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
-TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-or to a subset of these values.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the --tls-cipher-suites parameter as follows, or to a subset of these values.
---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.13 Ensure that a limit is set on pod PIDs (Manual) - - -**Result:** warn - -**Remediation:** -Decide on an appropriate level for this parameter and set it, -either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting. - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--pod-max-pids' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -## 5.1 RBAC and Service Accounts -### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual) - - -**Result:** warn - -**Remediation:** -Identify all clusterrolebindings to the cluster-admin role. Check if they are used and -if they need this role or if they could use a role with fewer privileges. 
-Where possible, first bind users to a lower privileged role and then remove the -clusterrolebinding to the cluster-admin role : -kubectl delete clusterrolebinding [name] - -### 5.1.2 Minimize access to secrets (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove get, list and watch access to Secret objects in the cluster. - -### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual) - - -**Result:** warn - -**Remediation:** -Where possible replace any use of wildcards in clusterroles and roles with specific -objects or actions. - -### 5.1.4 Minimize access to create pods (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove create access to pod objects in the cluster. - -### 5.1.5 Ensure that default service accounts are not actively used. (Manual) - - -**Result:** pass - -**Remediation:** -Create explicit service accounts wherever a Kubernetes workload requires specific access -to the Kubernetes API server. -Modify the configuration of each default service account to include this value -automountServiceAccountToken: false - -**Audit Script:** `check_for_default_sa.sh` - -```bash -#!/bin/bash - -set -eE - -handle_error() { - echo "false" -} - -trap 'handle_error' ERR - -count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l) -if [[ ${count_sa} -gt 0 ]]; then - echo "false" - exit -fi - -for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name") -do - for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"') - do - read kind name <<<$(IFS=","; echo $result) - 
resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l) - if [[ ${resource_count} -gt 0 ]]; then - echo "false" - exit - fi - done -done - - -echo "true" - -``` - -**Audit Execution:** - -```bash -./check_for_default_sa.sh -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Returned Value**: - -```console -true -``` - -### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) - - -**Result:** warn - -**Remediation:** -Modify the definition of pods and service accounts which do not need to mount service -account tokens to disable it. - -### 5.1.7 Avoid use of system:masters group (Manual) - - -**Result:** warn - -**Remediation:** -Remove the system:masters group from all users in the cluster. - -### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove the impersonate, bind and escalate rights from subjects. - -### 5.1.9 Minimize access to create persistent volumes (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove create access to PersistentVolume objects in the cluster. - -### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the proxy sub-resource of node objects. - -### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the approval sub-resource of certificatesigningrequest objects. 
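The RBAC reviews above (5.1.1 in particular) can be scripted in the same style as this guide's existing audit scripts. A minimal sketch, assuming `jq` is available; the function name and the inline sample JSON are illustrative, and in a live audit you would pipe `kubectl` output instead:

```bash
#!/bin/bash

# Print "NAME: subjects" for every ClusterRoleBinding that grants
# cluster-admin, so each binding can be reviewed and, where possible,
# replaced with a less privileged role (check 5.1.1).
find_cluster_admin_bindings() {
  jq -r '.items[]
         | select(.roleRef.name == "cluster-admin")
         | "\(.metadata.name): \([.subjects[]? | "\(.kind)/\(.name)"] | join(", "))"'
}

# In a live audit, run:
#   kubectl get clusterrolebindings -o json | find_cluster_admin_bindings
# Here an inline sample stands in for the kubectl output:
echo '{"items":[
  {"metadata":{"name":"cluster-admin"},"roleRef":{"name":"cluster-admin"},
   "subjects":[{"kind":"Group","name":"system:masters"}]},
  {"metadata":{"name":"view-only"},"roleRef":{"name":"view"},
   "subjects":[{"kind":"User","name":"alice"}]}
]}' | find_cluster_admin_bindings
# → cluster-admin: Group/system:masters
```

Bindings surfaced this way can then be deleted with `kubectl delete clusterrolebinding [name]` once a lower-privileged replacement is in place.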
- -### 5.1.12 Minimize access to webhook configuration objects (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects - -### 5.1.13 Minimize access to the service account token creation (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the token sub-resource of serviceaccount objects. - -## 5.2 Pod Security Standards -### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual) - - -**Result:** warn - -**Remediation:** -Ensure that either Pod Security Admission or an external policy control system is in place -for every namespace which contains user workloads. - -### 5.2.2 Minimize the admission of privileged containers (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of privileged containers. - -### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostPID` containers. - -### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostIPC` containers. - -### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostNetwork` containers. 
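Checks 5.2.1 through 5.2.5 can be satisfied with the built-in Pod Security Admission controller as the "active policy control mechanism." A sketch of the namespace labels involved; the namespace name is illustrative, not taken from this guide:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: user-workloads   # illustrative namespace name
  labels:
    # "baseline" already rejects privileged, hostPID, hostIPC, and
    # hostNetwork pods; "restricted" additionally covers checks such as
    # allowPrivilegeEscalation and running as root.
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
```

External policy engines (for example an admission webhook) are an alternative way to meet the same checks.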
-
-### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
-
-### 5.2.7 Minimize the admission of root containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
-or `MustRunAs` with the range of UIDs not including 0, is set.
-
-### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with the `NET_RAW` capability.
-
-### 5.2.9 Minimize the admission of containers with added capabilities (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that `allowedCapabilities` is not present in policies for the cluster unless
-it is set to an empty array.
-
-### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the use of capabilities in applications running on your cluster. Where a namespace
-contains applications which do not require any Linux capabilities to operate, consider adding
-a PSP which forbids the admission of containers which do not drop all capabilities.
-
-### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
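At the workload level, the restrictions in 5.2.6 through 5.2.10 translate into a pod `securityContext`. A sketch of a conforming pod; the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example            # illustrative name
spec:
  securityContext:
    runAsNonRoot: true              # 5.2.7: refuse to run as UID 0
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image
    securityContext:
      allowPrivilegeEscalation: false     # 5.2.6
      capabilities:
        drop: ["ALL"]                     # 5.2.8 / 5.2.10: drop NET_RAW and the rest
```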
- -### 5.2.12 Minimize the admission of HostPath volumes (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers with `hostPath` volumes. - -### 5.2.13 Minimize the admission of containers which use HostPorts (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers which use `hostPort` sections. - -## 5.3 Network Policies and CNI -### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual) - - -**Result:** warn - -**Remediation:** -If the CNI plugin in use does not support network policies, consideration should be given to -making use of a different plugin, or finding an alternate mechanism for restricting traffic -in the Kubernetes cluster. - -### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual) - - -**Result:** warn - -**Remediation:** -Follow the documentation and create NetworkPolicy objects as you need them. - -## 5.4 Secrets Management -### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual) - - -**Result:** warn - -**Remediation:** -If possible, rewrite application code to read Secrets from mounted secret files, rather than -from environment variables. - -### 5.4.2 Consider external secret storage (Manual) - - -**Result:** warn - -**Remediation:** -Refer to the Secrets management options offered by your cloud provider or a third-party -secrets management solution. - -## 5.5 Extensible Admission Control -### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual) - - -**Result:** warn - -**Remediation:** -Follow the Kubernetes documentation and setup image provenance. 
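For 5.3.2, a common starting point is a default-deny NetworkPolicy in each namespace, after which traffic is re-allowed selectively. A sketch; the namespace name is illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: user-workloads   # illustrative namespace
spec:
  podSelector: {}             # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress                   # deny all inbound traffic by default
  - Egress                    # deny all outbound traffic by default
```

Note this is only effective if the CNI plugin in use enforces NetworkPolicies (check 5.3.1).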
- -## 5.7 General Policies -### 5.7.1 Create administrative boundaries between resources using namespaces (Manual) - - -**Result:** warn - -**Remediation:** -Follow the documentation and create namespaces for objects in your deployment as you need -them. - -### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual) - - -**Result:** warn - -**Remediation:** -Use `securityContext` to enable the docker/default seccomp profile in your pod definitions. -An example is as below: - securityContext: - seccompProfile: - type: RuntimeDefault - -### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual) - - -**Result:** warn - -**Remediation:** -Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a -suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker -Containers. - -### 5.7.4 The default namespace should not be used (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Ensure that namespaces are created to allow for appropriate segregation of Kubernetes -resources and that all new resources are created in a specific namespace. 
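Checks 5.7.2 and 5.7.3 can be satisfied together in a single Pod spec. A minimal sketch with illustrative names and a placeholder image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example            # illustrative name
spec:
  securityContext:
    runAsNonRoot: true              # pod-level: refuse to run as UID 0
    seccompProfile:
      type: RuntimeDefault          # the docker/default seccomp profile
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]             # drop all Linux capabilities
```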
- From 7c297ad550bd1f90888157fa1dac0a82afa6b9a3 Mon Sep 17 00:00:00 2001
From: LucasSaintarbor 
Date: Thu, 24 Jul 2025 09:09:08 -0700
Subject: [PATCH 35/57] Remove RKE1 references in hardening-guides.md

---
 .../rancher-security/hardening-guides/hardening-guides.md | 7 -------
 .../rancher-security/hardening-guides/hardening-guides.md | 7 -------
 .../rancher-security/hardening-guides/hardening-guides.md | 7 -------
 .../rancher-security/hardening-guides/hardening-guides.md | 7 -------
 4 files changed, 28 deletions(-)

diff --git a/docs/reference-guides/rancher-security/hardening-guides/hardening-guides.md b/docs/reference-guides/rancher-security/hardening-guides/hardening-guides.md
index c591b3a2fbd..1532e274eb5 100644
--- a/docs/reference-guides/rancher-security/hardening-guides/hardening-guides.md
+++ b/docs/reference-guides/rancher-security/hardening-guides/hardening-guides.md
@@ -12,7 +12,6 @@ Rancher provides specific security hardening guides for each supported Rancher v

 Rancher uses the following Kubernetes distributions:

-- [**RKE**](https://rancher.com/docs/rke/latest/en/), Rancher Kubernetes Engine, is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers.
- [**RKE2**](https://docs.rke2.io/) is a fully conformant Kubernetes distribution that focuses on security and compliance within the U.S. Federal Government sector.
- [**K3s**](https://docs.k3s.io/) is a fully conformant, lightweight Kubernetes distribution. It is easy to install, with half the memory requirement of upstream Kubernetes, all in a binary of less than 100 MB.
@@ -22,12 +21,6 @@ To harden a Kubernetes cluster that's running a distribution other than those li

 Each self-assessment guide is accompanied by a hardening guide. These guides were tested alongside the listed Rancher releases. Each self-assessment guide was tested on a specific Kubernetes version and CIS benchmark version.
If a CIS benchmark has not been validated for your Kubernetes version, you can use the existing guides until a guide for your version is added. -### RKE Guides - -| Kubernetes Version | CIS Benchmark Version | Self Assessment Guide | Hardening Guides | -|--------------------|-----------------------|-----------------------|------------------| -| Kubernetes v1.25/v1.26/v1.27 | CIS v1.7 | [Link](rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md) | [Link](rke1-hardening-guide/rke1-hardening-guide.md) | - ### RKE2 Guides | Type | Kubernetes Version | CIS Benchmark Version | Self Assessment Guide | Hardening Guides | diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/hardening-guides.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/hardening-guides.md index 8dc764b2f9a..799268cdd1e 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/hardening-guides.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/hardening-guides.md @@ -12,7 +12,6 @@ Rancher 为每个受支持的 Rancher 版本的 Kubernetes 发行版提供了特 Rancher 使用以下 Kubernetes 发行版: -- [**RKE**](https://rancher.com/docs/rke/latest/en/)(Rancher Kubernetes Engine)是经过 CNCF 认证的 Kubernetes 发行版,完全在 Docker 容器中运行。 - [**RKE2**](https://docs.rke2.io/) 是一个完全合规的 Kubernetes 发行版,专注于安全和合规性。 - [**K3s**](https://docs.k3s.io/) 是一个完全合规的,轻量级 Kubernetes 发行版。它易于安装,内存需求只有上游 Kubernetes 的一半,所有组件都在一个小于 100 MB 的二进制文件中。 @@ -22,12 +21,6 @@ Rancher 使用以下 Kubernetes 发行版: 每个自我评估指南都附有强化指南。这些指南与列出的 Rancher 版本一起进行了测试。每个自我评估指南都在特定的 Kubernetes 版本和 CIS Benchmark 版本上进行了测试。如果 CIS Benchmark 尚未针对你的 Kubernetes 版本进行验证,你可以使用现有指南,直到添加适合你的版本的指南。 -### RKE 指南 - -| Kubernetes 版本 | CIS Benchmark 版本 | 自我评估指南 | 加固指南 | -|--------------------|-----------------------|-----------------------|------------------| -| Kubernetes 
v1.25/v1.26/v1.27 | CIS v1.7 | [链接](rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md) | [链接](rke1-hardening-guide/rke1-hardening-guide.md) | - ### RKE2 指南 | 类型 | Kubernetes 版本 | CIS Benchmark 版本 | 自我评估指南 | 加固指南 | diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/hardening-guides.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/hardening-guides.md index 8dc764b2f9a..799268cdd1e 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/hardening-guides.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/hardening-guides.md @@ -12,7 +12,6 @@ Rancher 为每个受支持的 Rancher 版本的 Kubernetes 发行版提供了特 Rancher 使用以下 Kubernetes 发行版: -- [**RKE**](https://rancher.com/docs/rke/latest/en/)(Rancher Kubernetes Engine)是经过 CNCF 认证的 Kubernetes 发行版,完全在 Docker 容器中运行。 - [**RKE2**](https://docs.rke2.io/) 是一个完全合规的 Kubernetes 发行版,专注于安全和合规性。 - [**K3s**](https://docs.k3s.io/) 是一个完全合规的,轻量级 Kubernetes 发行版。它易于安装,内存需求只有上游 Kubernetes 的一半,所有组件都在一个小于 100 MB 的二进制文件中。 @@ -22,12 +21,6 @@ Rancher 使用以下 Kubernetes 发行版: 每个自我评估指南都附有强化指南。这些指南与列出的 Rancher 版本一起进行了测试。每个自我评估指南都在特定的 Kubernetes 版本和 CIS Benchmark 版本上进行了测试。如果 CIS Benchmark 尚未针对你的 Kubernetes 版本进行验证,你可以使用现有指南,直到添加适合你的版本的指南。 -### RKE 指南 - -| Kubernetes 版本 | CIS Benchmark 版本 | 自我评估指南 | 加固指南 | -|--------------------|-----------------------|-----------------------|------------------| -| Kubernetes v1.25/v1.26/v1.27 | CIS v1.7 | [链接](rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md) | [链接](rke1-hardening-guide/rke1-hardening-guide.md) | - ### RKE2 指南 | 类型 | Kubernetes 版本 | CIS Benchmark 版本 | 自我评估指南 | 加固指南 | diff --git a/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/hardening-guides.md 
b/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/hardening-guides.md
index ca021c8dc53..15770dae787 100644
--- a/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/hardening-guides.md
+++ b/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/hardening-guides.md
@@ -12,7 +12,6 @@ Rancher provides specific security hardening guides for each supported Rancher v

 Rancher uses the following Kubernetes distributions:

-- [**RKE**](https://rancher.com/docs/rke/latest/en/), Rancher Kubernetes Engine, is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers.
- [**RKE2**](https://docs.rke2.io/) is a fully conformant Kubernetes distribution that focuses on security and compliance within the U.S. Federal Government sector.
- [**K3s**](https://docs.k3s.io/) is a fully conformant, lightweight Kubernetes distribution. It is easy to install, with half the memory requirement of upstream Kubernetes, all in a binary of less than 100 MB.
@@ -22,12 +21,6 @@ To harden a Kubernetes cluster that's running a distribution other than those li

 Each self-assessment guide is accompanied by a hardening guide. These guides were tested alongside the listed Rancher releases. Each self-assessment guide was tested on a specific Kubernetes version and CIS benchmark version. If a CIS benchmark has not been validated for your Kubernetes version, you can use the existing guides until a guide for your version is added.
-### RKE Guides - -| Kubernetes Version | CIS Benchmark Version | Self Assessment Guide | Hardening Guides | -|--------------------|-----------------------|-----------------------|------------------| -| Kubernetes v1.25/v1.26/v1.27 | CIS v1.7 | [Link](rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md) | [Link](rke1-hardening-guide/rke1-hardening-guide.md) | - ### RKE2 Guides | Type | Kubernetes Version | CIS Benchmark Version | Self Assessment Guide | Hardening Guides | From fbec5d7ebf0d75fb67280bb0b8e2bf2a29fcc5eb Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Thu, 24 Jul 2025 09:13:33 -0700 Subject: [PATCH 36/57] Remove RKE1 references in rancher-security-best-practices.md --- .../rancher-security/rancher-security-best-practices.md | 2 +- .../rancher-security/rancher-security-best-practices.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/reference-guides/rancher-security/rancher-security-best-practices.md b/docs/reference-guides/rancher-security/rancher-security-best-practices.md index fa958639c1f..beeb2888880 100644 --- a/docs/reference-guides/rancher-security/rancher-security-best-practices.md +++ b/docs/reference-guides/rancher-security/rancher-security-best-practices.md @@ -25,6 +25,6 @@ If you require such features, combine Layer 7 firewalls with [external authentic You should protect the following ports behind an [external load balancer](../../how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/layer-4-and-layer-7-load-balancing.md#layer-4-load-balancer) that has SSL offload enabled: - **K3s:** Port 6443, used by the Kubernetes API. -- **RKE and RKE2:** Port 6443, used by the Kubernetes API, and port 9345, used for node registration. +- **RKE2:** Port 6443, used by the Kubernetes API, and port 9345, used for node registration. These ports have TLS SAN certificates which list nodes' public IP addresses. 
An attacker could use that information to gain unauthorized access or monitor activity on the cluster. Protecting these ports helps mitigate against nodes' public IP addresses being disclosed to potential attackers. diff --git a/versioned_docs/version-2.12/reference-guides/rancher-security/rancher-security-best-practices.md b/versioned_docs/version-2.12/reference-guides/rancher-security/rancher-security-best-practices.md index fa958639c1f..beeb2888880 100644 --- a/versioned_docs/version-2.12/reference-guides/rancher-security/rancher-security-best-practices.md +++ b/versioned_docs/version-2.12/reference-guides/rancher-security/rancher-security-best-practices.md @@ -25,6 +25,6 @@ If you require such features, combine Layer 7 firewalls with [external authentic You should protect the following ports behind an [external load balancer](../../how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/layer-4-and-layer-7-load-balancing.md#layer-4-load-balancer) that has SSL offload enabled: - **K3s:** Port 6443, used by the Kubernetes API. -- **RKE and RKE2:** Port 6443, used by the Kubernetes API, and port 9345, used for node registration. +- **RKE2:** Port 6443, used by the Kubernetes API, and port 9345, used for node registration. These ports have TLS SAN certificates which list nodes' public IP addresses. An attacker could use that information to gain unauthorized access or monitor activity on the cluster. Protecting these ports helps mitigate against nodes' public IP addresses being disclosed to potential attackers. 
From e16df6105bd4eed5fb445a6c53d90d5e56802cd5 Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Thu, 24 Jul 2025 09:33:29 -0700 Subject: [PATCH 37/57] Remove RKE1 references in rancher-security.md --- docs/reference-guides/rancher-security/rancher-security.md | 2 +- .../reference-guides/rancher-security/rancher-security.md | 2 +- .../reference-guides/rancher-security/rancher-security.md | 2 +- .../reference-guides/rancher-security/rancher-security.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/reference-guides/rancher-security/rancher-security.md b/docs/reference-guides/rancher-security/rancher-security.md index f16699b8ac6..0a891b55e09 100644 --- a/docs/reference-guides/rancher-security/rancher-security.md +++ b/docs/reference-guides/rancher-security/rancher-security.md @@ -67,7 +67,7 @@ Each version of the hardening guide is intended to be used with specific version The benchmark self-assessment is a companion to the Rancher security hardening guide. While the hardening guide shows you how to harden the cluster, the benchmark guide is meant to help you evaluate the level of security of the hardened cluster. -Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. This guide will walk through the various controls and provide updated example commands to audit compliance in Rancher created clusters. The original benchmark documents can be downloaded from the [CIS website](https://www.cisecurity.org/benchmark/kubernetes/). +Because Rancher installs Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. This guide will walk through the various controls and provide updated example commands to audit compliance in Rancher created clusters. The original benchmark documents can be downloaded from the [CIS website](https://www.cisecurity.org/benchmark/kubernetes/). 
Each version of Rancher's self-assessment guide corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark. diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-security.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-security.md index 85c1e15e37c..c1434b4d54f 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-security.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-security.md @@ -67,7 +67,7 @@ Rancher 加固指南基于 Date: Thu, 24 Jul 2025 22:29:19 +0530 Subject: [PATCH 38/57] [v2.12] Add Documentation for RKE1 Cluster Cleanup (#1879) * docs: add RKE1 resource validation and cleanup instructions for Rancher v2.12 upgrade * docs: update upgrade instructions to include link to pre-upgrade cleanup script for RKE1 resources * docs: update upgrade instructions * docs: update upgrade instructions for 2.12 docs * docs: add RKE1 resource validation and upgrade requirements for 2.11,2.10 & 2.9 docs * docs: added 'documentation' at the end Signed-off-by: swastik959 * Removing changes from v2.9/v2.10 aligning with uprgrade process Signed-off-by: Sunil Singh * docs: added grammar corrections Signed-off-by: swastik959 --------- Signed-off-by: swastik959 Signed-off-by: Sunil Singh Co-authored-by: swastik959 Co-authored-by: Sunil Singh --- .../upgrade-docker-installed-rancher.md | 43 +++++++++++++++++++ .../upgrade-docker-installed-rancher.md | 43 +++++++++++++++++++ .../upgrade-docker-installed-rancher.md | 43 +++++++++++++++++++ 3 files changed, 129 insertions(+) diff --git a/docs/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/upgrade-docker-installed-rancher.md 
b/docs/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/upgrade-docker-installed-rancher.md index c4618be475f..fbdc5c0356e 100644 --- a/docs/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/upgrade-docker-installed-rancher.md +++ b/docs/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/upgrade-docker-installed-rancher.md @@ -53,6 +53,10 @@ You can obtain `` and `` by loggi ## Upgrade +:::danger +Rancher upgrades to version 2.12.0 and later will be blocked if any RKE1-related resources are detected, as the Rancher Kubernetes Engine (RKE/RKE1) is end of life as of **July 31, 2025**. For detailed cleanup and recovery steps, refer to the [RKE1 Resource Validation and Upgrade Requirements in Rancher v2.12](#rke1-resource-validation-and-upgrade-requirements-in-rancher-v212). +::: + During upgrade, you create a copy of the data from your current Rancher container and a backup in case something goes wrong. Then you deploy the new version of Rancher in a new container using your existing data. ### 1. Create a copy of the data from your Rancher server container @@ -388,6 +392,45 @@ See [Restoring Cluster Networking](https://github.com/rancher/rancher-docs/tree/ Remove the previous Rancher server container. If you only stop the previous Rancher server container (and don't remove it), the container may restart after the next server reboot. +## RKE1 Resource Validation and Upgrade Requirements in Rancher v2.12 + +Rancher v2.12.0 and later has removed support for the Rancher Kubernetes Engine (RKE/RKE1). During upgrade, Rancher validates the cluster resources and blocks the upgrade if any RKE1-related resources are detected. 
+ +This validation affects the following resource types: + +- Clusters with `rkeConfig` (`clusters.management.cattle.io`) +- NodeTemplates (`nodetemplates.management.cattle.io`) +- ClusterTemplates (`clustertemplates.management.cattle.io`) + +This is particularly relevant for single-node Docker installations, where Rancher is not running during the upgrade. In such cases, controllers are not available to automatically clean up deprecated resources, and the upgrade process will fail early with an error listing the blocking resources. + +### 1. Pre-Upgrade (Recommended) + +Before upgrading, while Rancher is still running: + +- Run the `pre-upgrade-hook` cleanup script to delete all RKE1 clusters and templates. You can find the script in the Rancher GitHub repository: [pre-upgrade-hook.sh](https://github.com/rancher/rancher/blob/v2.12.0/chart/scripts/pre-upgrade-hook.sh). +- This allows Rancher to clean up associated resources and finalizers. + +### 2. Post-Upgrade Failure Due to Residual RKE1 Resources + +If the upgrade to Rancher v2.12.0 or later is attempted without prior cleanup of RKE1 resources: + +- The upgrade will fail and display an error listing the resource names that are preventing the upgrade. +- This occurs because Rancher includes validation to detect and block upgrades when unsupported RKE1 resources are still present. +- To proceed, [rollback](#rolling-back) to the previous Rancher version, delete the identified resources, and then retry after [manual cleanup](#manual-cleanup-after-rollback). + +:::note Helm-based Rancher +Helm-based Rancher installations are not affected by this issue, as Rancher remains available during the upgrade and can perform resource cleanup as needed. +::: + +### Manual Cleanup After Rollback + +Users should perform the following steps after rolling back to a previous Rancher version: + +- **Manually delete** the resources listed in the upgrade error message (e.g., RKE1 clusters, NodeTemplates, ClusterTemplates). 
+- If deletion is blocked due to **finalizers**, edit the resources and remove the `metadata.finalizers` field. +- If a **validating webhook** prevents deletion (e.g., for the `system-project`), please refer to the [Bypassing the Webhook](../../../../reference-guides/rancher-webhook.md#bypassing-the-webhook) documentation. + ## Rolling Back If your upgrade does not complete successfully, you can roll back Rancher server and its data back to its last healthy state. For more information, see [Docker Rollback](roll-back-docker-installed-rancher.md). diff --git a/versioned_docs/version-2.11/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/upgrade-docker-installed-rancher.md b/versioned_docs/version-2.11/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/upgrade-docker-installed-rancher.md index c4618be475f..fbdc5c0356e 100644 --- a/versioned_docs/version-2.11/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/upgrade-docker-installed-rancher.md +++ b/versioned_docs/version-2.11/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/upgrade-docker-installed-rancher.md @@ -53,6 +53,10 @@ You can obtain `` and `` by loggi ## Upgrade +:::danger +Rancher upgrades to version 2.12.0 and later will be blocked if any RKE1-related resources are detected, as the Rancher Kubernetes Engine (RKE/RKE1) is end of life as of **July 31, 2025**. For detailed cleanup and recovery steps, refer to the [RKE1 Resource Validation and Upgrade Requirements in Rancher v2.12](#rke1-resource-validation-and-upgrade-requirements-in-rancher-v212). +::: + During upgrade, you create a copy of the data from your current Rancher container and a backup in case something goes wrong. Then you deploy the new version of Rancher in a new container using your existing data. ### 1. 
Create a copy of the data from your Rancher server container @@ -388,6 +392,45 @@ See [Restoring Cluster Networking](https://github.com/rancher/rancher-docs/tree/ Remove the previous Rancher server container. If you only stop the previous Rancher server container (and don't remove it), the container may restart after the next server reboot. +## RKE1 Resource Validation and Upgrade Requirements in Rancher v2.12 + +Rancher v2.12.0 and later has removed support for the Rancher Kubernetes Engine (RKE/RKE1). During upgrade, Rancher validates the cluster resources and blocks the upgrade if any RKE1-related resources are detected. + +This validation affects the following resource types: + +- Clusters with `rkeConfig` (`clusters.management.cattle.io`) +- NodeTemplates (`nodetemplates.management.cattle.io`) +- ClusterTemplates (`clustertemplates.management.cattle.io`) + +This is particularly relevant for single-node Docker installations, where Rancher is not running during the upgrade. In such cases, controllers are not available to automatically clean up deprecated resources, and the upgrade process will fail early with an error listing the blocking resources. + +### 1. Pre-Upgrade (Recommended) + +Before upgrading, while Rancher is still running: + +- Run the `pre-upgrade-hook` cleanup script to delete all RKE1 clusters and templates. You can find the script in the Rancher GitHub repository: [pre-upgrade-hook.sh](https://github.com/rancher/rancher/blob/v2.12.0/chart/scripts/pre-upgrade-hook.sh). +- This allows Rancher to clean up associated resources and finalizers. + +### 2. Post-Upgrade Failure Due to Residual RKE1 Resources + +If the upgrade to Rancher v2.12.0 or later is attempted without prior cleanup of RKE1 resources: + +- The upgrade will fail and display an error listing the resource names that are preventing the upgrade. +- This occurs because Rancher includes validation to detect and block upgrades when unsupported RKE1 resources are still present. 
+- To proceed, [rollback](#rolling-back) to the previous Rancher version, delete the identified resources, and then retry after [manual cleanup](#manual-cleanup-after-rollback). + +:::note Helm-based Rancher +Helm-based Rancher installations are not affected by this issue, as Rancher remains available during the upgrade and can perform resource cleanup as needed. +::: + +### Manual Cleanup After Rollback + +Users should perform the following steps after rolling back to a previous Rancher version: + +- **Manually delete** the resources listed in the upgrade error message (e.g., RKE1 clusters, NodeTemplates, ClusterTemplates). +- If deletion is blocked due to **finalizers**, edit the resources and remove the `metadata.finalizers` field. +- If a **validating webhook** prevents deletion (e.g., for the `system-project`), please refer to the [Bypassing the Webhook](../../../../reference-guides/rancher-webhook.md#bypassing-the-webhook) documentation. + ## Rolling Back If your upgrade does not complete successfully, you can roll back Rancher server and its data back to its last healthy state. For more information, see [Docker Rollback](roll-back-docker-installed-rancher.md). 
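The manual cleanup steps above translate into a handful of `kubectl` commands. A hedged sketch — the resource name and namespace are placeholders standing in for values reported in the upgrade error message:

```shell
# List any remaining RKE1-related resources that would block the upgrade.
kubectl get clusters.management.cattle.io -A
kubectl get nodetemplates.management.cattle.io -A
kubectl get clustertemplates.management.cattle.io -A

# Delete a blocking resource named in the error (placeholder name/namespace).
kubectl delete nodetemplates.management.cattle.io my-node-template -n my-namespace

# If deletion hangs on finalizers, clear metadata.finalizers first, then retry.
kubectl patch nodetemplates.management.cattle.io my-node-template -n my-namespace \
  --type=merge -p '{"metadata":{"finalizers":[]}}'
```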
diff --git a/versioned_docs/version-2.12/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/upgrade-docker-installed-rancher.md b/versioned_docs/version-2.12/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/upgrade-docker-installed-rancher.md index c4618be475f..fbdc5c0356e 100644 --- a/versioned_docs/version-2.12/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/upgrade-docker-installed-rancher.md +++ b/versioned_docs/version-2.12/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/upgrade-docker-installed-rancher.md @@ -53,6 +53,10 @@ You can obtain `` and `` by loggi ## Upgrade +:::danger +Rancher upgrades to version 2.12.0 and later will be blocked if any RKE1-related resources are detected, as the Rancher Kubernetes Engine (RKE/RKE1) is end of life as of **July 31, 2025**. For detailed cleanup and recovery steps, refer to the [RKE1 Resource Validation and Upgrade Requirements in Rancher v2.12](#rke1-resource-validation-and-upgrade-requirements-in-rancher-v212). +::: + During upgrade, you create a copy of the data from your current Rancher container and a backup in case something goes wrong. Then you deploy the new version of Rancher in a new container using your existing data. ### 1. Create a copy of the data from your Rancher server container @@ -388,6 +392,45 @@ See [Restoring Cluster Networking](https://github.com/rancher/rancher-docs/tree/ Remove the previous Rancher server container. If you only stop the previous Rancher server container (and don't remove it), the container may restart after the next server reboot. +## RKE1 Resource Validation and Upgrade Requirements in Rancher v2.12 + +Rancher v2.12.0 and later has removed support for the Rancher Kubernetes Engine (RKE/RKE1). 
During upgrade, Rancher validates the cluster resources and blocks the upgrade if any RKE1-related resources are detected. + +This validation affects the following resource types: + +- Clusters with `rkeConfig` (`clusters.management.cattle.io`) +- NodeTemplates (`nodetemplates.management.cattle.io`) +- ClusterTemplates (`clustertemplates.management.cattle.io`) + +This is particularly relevant for single-node Docker installations, where Rancher is not running during the upgrade. In such cases, controllers are not available to automatically clean up deprecated resources, and the upgrade process will fail early with an error listing the blocking resources. + +### 1. Pre-Upgrade (Recommended) + +Before upgrading, while Rancher is still running: + +- Run the `pre-upgrade-hook` cleanup script to delete all RKE1 clusters and templates. You can find the script in the Rancher GitHub repository: [pre-upgrade-hook.sh](https://github.com/rancher/rancher/blob/v2.12.0/chart/scripts/pre-upgrade-hook.sh). +- This allows Rancher to clean up associated resources and finalizers. + +### 2. Post-Upgrade Failure Due to Residual RKE1 Resources + +If the upgrade to Rancher v2.12.0 or later is attempted without prior cleanup of RKE1 resources: + +- The upgrade will fail and display an error listing the resource names that are preventing the upgrade. +- This occurs because Rancher includes validation to detect and block upgrades when unsupported RKE1 resources are still present. +- To proceed, [rollback](#rolling-back) to the previous Rancher version, delete the identified resources, and then retry after [manual cleanup](#manual-cleanup-after-rollback). + +:::note Helm-based Rancher +Helm-based Rancher installations are not affected by this issue, as Rancher remains available during the upgrade and can perform resource cleanup as needed. 
+::: + +### Manual Cleanup After Rollback + +Users should perform the following steps after rolling back to a previous Rancher version: + +- **Manually delete** the resources listed in the upgrade error message (e.g., RKE1 clusters, NodeTemplates, ClusterTemplates). +- If deletion is blocked due to **finalizers**, edit the resources and remove the `metadata.finalizers` field. +- If a **validating webhook** prevents deletion (e.g., for the `system-project`), please refer to the [Bypassing the Webhook](../../../../reference-guides/rancher-webhook.md#bypassing-the-webhook) documentation. + ## Rolling Back If your upgrade does not complete successfully, you can roll back Rancher server and its data back to its last healthy state. For more information, see [Docker Rollback](roll-back-docker-installed-rancher.md). From 3bcfa53a5264e6d312a316354212d7b48a14ce46 Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Thu, 24 Jul 2025 14:36:13 -0700 Subject: [PATCH 39/57] Add back removed files --- .../rke1-cluster-configuration.md | 365 +++ .../rke1-hardening-guide.md | 513 +++ ...ide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md | 2864 +++++++++++++++++ .../rke1-cluster-configuration.md | 357 ++ .../rke1-hardening-guide.md | 516 +++ ...ide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md | 2863 ++++++++++++++++ .../rke1-cluster-configuration.md | 357 ++ .../rke1-hardening-guide.md | 516 +++ ...ide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md | 2863 ++++++++++++++++ .../rke1-cluster-configuration.md | 365 +++ .../rke1-hardening-guide.md | 513 +++ ...ide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md | 2864 +++++++++++++++++ 12 files changed, 14956 insertions(+) create mode 100644 docs/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md create mode 100644 docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md create mode 100644 
docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md create mode 100644 i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md create mode 100644 i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md create mode 100644 i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md create mode 100644 i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md create mode 100644 i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md create mode 100644 i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md create mode 100644 versioned_docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md create mode 100644 versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md create mode 100644 versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md diff --git a/docs/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md b/docs/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md new file mode 100644 index 00000000000..7954c8a3b11 --- /dev/null +++ 
b/docs/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md @@ -0,0 +1,365 @@ +--- +title: RKE Cluster Configuration Reference +--- + + + + + + + +When Rancher installs Kubernetes, it uses [RKE](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) or [RKE2](https://docs.rke2.io/) as the Kubernetes distribution. + +This section covers the configuration options that are available in Rancher for a new or existing RKE Kubernetes cluster. + + +## Overview + +You can configure the Kubernetes options one of two ways: + +- [Rancher UI](#configuration-options-in-the-rancher-ui): Use the Rancher UI to select options that are commonly customized when setting up a Kubernetes cluster. +- [Cluster Config File](#rke-cluster-config-file-reference): Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the options available in an RKE installation, except for system_images configuration, by specifying them in YAML. + +The RKE cluster config options are nested under the `rancher_kubernetes_engine_config` directive. For more information, see the section about the [cluster config file.](#rke-cluster-config-file-reference) + +In [clusters launched by RKE](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md), you can edit any of the remaining options that follow. + +For an example of RKE config file syntax, see the [RKE documentation](https://rancher.com/docs/rke/latest/en/example-yamls/). + +The forms in the Rancher UI don't include all advanced options for configuring RKE. For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/) + +## Editing Clusters with a Form in the Rancher UI + +To edit your cluster, + +1. 
In the upper left corner, click **☰ > Cluster Management**.
+1. Go to the cluster you want to configure and click **⋮ > Edit Config**.
+
+
+## Editing Clusters with YAML
+
+Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the options available in an RKE installation, except for system_images configuration, by specifying them in YAML.
+
+RKE clusters (also called RKE1 clusters) are edited differently than RKE2 and K3s clusters.
+
+To edit an RKE config file directly from the Rancher UI,
+
+1. Click **☰ > Cluster Management**.
+1. Go to the RKE cluster you want to configure and click **⋮ > Edit Config**. This takes you to the RKE configuration form. Note: Because cluster provisioning changed in Rancher 2.6, the **⋮ > Edit as YAML** option can be used for configuring RKE2 clusters, but it can't be used for editing RKE1 configuration.
+1. In the configuration form, scroll down and click **Edit as YAML**.
+1. Edit the RKE options under the `rancher_kubernetes_engine_config` directive.
+
+## Configuration Options in the Rancher UI
+
+:::tip
+
+Some advanced configuration options are not exposed in the Rancher UI forms, but they can be enabled by editing the RKE cluster configuration file in YAML. For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/)
+
+:::
+
+### Kubernetes Version
+
+The version of Kubernetes installed on your cluster nodes. Rancher packages its own version of Kubernetes based on [hyperkube](https://github.com/rancher/hyperkube).
+
+For more detail, see [Upgrading Kubernetes](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md).
+
+### Network Provider
+
+The [Network Provider](https://kubernetes.io/docs/concepts/cluster-administration/networking/) that the cluster uses.
For more details on the different networking providers, please view our [Networking FAQ](../../../faq/container-network-interface-providers.md).
+
+:::caution
+
+After you launch the cluster, you cannot change your network provider, as Kubernetes doesn't allow switching between network providers. Choose your network provider carefully: once a cluster is created with a network provider, changing it would require you to tear down the entire cluster and all its applications.
+
+:::
+
+Out of the box, Rancher is compatible with the following network providers:
+
+- [Canal](https://github.com/projectcalico/canal)
+- [Flannel](https://github.com/coreos/flannel#flannel)
+- [Calico](https://docs.projectcalico.org/v3.11/introduction/)
+- [Weave](https://github.com/weaveworks/weave)
+
+
+
+:::note Notes on Weave:
+
+When Weave is selected as the network provider, Rancher will automatically enable encryption by generating a random password. If you want to specify the password manually, please see how to configure your cluster using a [Config File](#rke-cluster-config-file-reference) and the [Weave Network Plug-in Options](https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/#weave-network-plug-in-options).
+
+:::
+
+### Project Network Isolation
+
+If your network provider allows project network isolation, you can choose whether to enable or disable inter-project communication.
+
+Project network isolation is available if you are using any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin.
+
+### Kubernetes Cloud Providers
+
+You can configure a [Kubernetes cloud provider](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md).
If you want to use dynamically provisioned [volumes and storage](../../../how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md) in Kubernetes, typically you must select the specific cloud provider in order to use it. For example, if you want to use Amazon EBS, you would need to select the `aws` cloud provider. + +:::note + +If the cloud provider you want to use is not listed as an option, you will need to use the [config file option](#rke-cluster-config-file-reference) to configure the cloud provider. Please reference the [RKE cloud provider documentation](https://rancher.com/docs/rke/latest/en/config-options/cloud-providers/) on how to configure the cloud provider. + +::: + +### Private Registries + +The cluster-level private registry configuration is only used for provisioning clusters. + +There are two main ways to set up private registries in Rancher: by setting up the [global default registry](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry.md) through the **Settings** tab in the global view, and by setting up a private registry in the advanced options in the cluster-level settings. The global default registry is intended to be used for air-gapped setups, for registries that do not require credentials. The cluster-level private registry is intended to be used in all setups in which the private registry requires credentials. + +If your private registry requires credentials, you need to pass the credentials to Rancher by editing the cluster options for each cluster that needs to pull images from the registry. + +The private registry configuration option tells Rancher where to pull the [system images](https://rancher.com/docs/rke/latest/en/config-options/system-images/) or [addon images](https://rancher.com/docs/rke/latest/en/config-options/add-ons/) that will be used in your cluster. 
+ +- **System images** are components needed to maintain the Kubernetes cluster. +- **Add-ons** are used to deploy several cluster components, including network plug-ins, the ingress controller, the DNS provider, or the metrics server. + +For more information on setting up a private registry for components applied during the provisioning of the cluster, see the [RKE documentation on private registries](https://rancher.com/docs/rke/latest/en/config-options/private-registries/). + +Rancher v2.6 introduced the ability to configure [ECR registries for RKE clusters](https://rancher.com/docs/rke/latest/en/config-options/private-registries/#amazon-elastic-container-registry-ecr-private-registry-setup). + +### Authorized Cluster Endpoint + +Authorized Cluster Endpoint (ACE) can be used to directly access the Kubernetes API server, without requiring communication through Rancher. + +:::note + +ACE is available on RKE, RKE2, and K3s clusters that are provisioned or registered with Rancher. It's not available on clusters in a hosted Kubernetes provider, such as Amazon's EKS. + +::: + +ACE must be set up [manually](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md#authorized-cluster-endpoint-support-for-rke2-and-k3s-clusters) on RKE2 and K3s clusters. In RKE, ACE is enabled by default in Rancher-launched Kubernetes clusters, using the IP of the node with the `controlplane` role and the default Kubernetes self-signed certificates. + +For more detail on how an authorized cluster endpoint works and why it is used, refer to the [architecture section.](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) + +We recommend using a load balancer with the authorized cluster endpoint. 
For details, refer to the [recommended architecture section.](../../rancher-manager-architecture/architecture-recommendations.md#architecture-for-an-authorized-cluster-endpoint-ace)
+
+### Node Pools
+
+For information on using the Rancher UI to set up node pools in an RKE cluster, refer to [this page.](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md)
+
+### NGINX Ingress
+
+If you want to publish your applications in a high-availability configuration, and you're hosting your nodes with a cloud provider that doesn't have a native load-balancing feature, enable this option to use NGINX Ingress within the cluster.
+
+### Metrics Server Monitoring
+
+Option to enable or disable [Metrics Server](https://rancher.com/docs/rke/latest/en/config-options/add-ons/metrics-server/).
+
+Each cloud provider capable of launching a cluster using RKE can collect metrics and monitor your cluster nodes. Enable this option to view your node metrics from your cloud provider's portal.
+
+### Pod Security Policy Support
+
+Option to enable pod security policies for the cluster. You must have an existing Pod Security Policy configured before you can use this option.
+
+### Docker Version on Nodes
+
+Configures whether nodes are allowed to run versions of Docker that Rancher doesn't officially support.
+
+If you choose to require a supported Docker version, Rancher will stop pods from running on nodes that don't have a supported Docker version installed.
+
+For details on which Docker versions were tested with each Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/)
+
+### Docker Root Directory
+
+If the nodes you are adding to the cluster have Docker configured with a non-default Docker Root Directory (default is `/var/lib/docker`), specify the correct Docker Root Directory in this option.
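To illustrate the Docker Root Directory option: if the Docker daemon on the nodes was configured with a custom data root, the cluster option should point at that same path. A minimal sketch, where `/data/docker` is a placeholder path rather than a value from this guide:

```yaml
# Cluster config file ("Edit as YAML"): match the Docker daemon's
# configured data root on the nodes (placeholder path shown).
docker_root_dir: /data/docker
```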
+
+### Default Pod Security Policy
+
+If you enable **Pod Security Policy Support**, use this drop-down to choose the pod security policy that's applied to the cluster.
+
+### Node Port Range
+
+Option to change the range of ports that can be used for [NodePort services](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport). Default is `30000-32767`.
+
+### Recurring etcd Snapshots
+
+Option to enable or disable [recurring etcd snapshots](https://rancher.com/docs/rke/latest/en/etcd-snapshots/#etcd-recurring-snapshots).
+
+### Agent Environment Variables
+
+Option to set environment variables for [rancher agents](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md). The environment variables can be set using key-value pairs. If the Rancher agent requires a proxy to communicate with the Rancher server, the `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` environment variables can be set using agent environment variables.
+
+### Updating ingress-nginx
+
+Clusters that were created before Kubernetes 1.16 will have an `ingress-nginx` `updateStrategy` of `OnDelete`. Clusters that were created with Kubernetes 1.16 or newer will have `RollingUpdate`.
+
+If the `updateStrategy` of `ingress-nginx` is `OnDelete`, you will need to delete these pods to get the correct version for your deployment.
+
+### Cluster Agent Configuration and Fleet Agent Configuration
+
+You can configure the scheduling fields and resource limits for the Cluster Agent and the cluster's Fleet Agent. You can use these fields to customize tolerations, affinity rules, and resource requirements. Additional tolerations are appended to a list of default tolerations and control plane node taints. If you define custom affinity rules, they override the global default affinity setting. Defining resource requirements sets requests or limits where there previously were none.
+
+:::note
+
+With this option, it's possible to override or remove rules that are required for the functioning of the cluster. We strongly recommend against removing or overriding these and any other affinity rules, as this may cause unwanted side effects:
+
+- `affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution` for `cattle-cluster-agent`
+- `cluster-agent-default-affinity` for `cattle-cluster-agent`
+- `fleet-agent-default-affinity` for `fleet-agent`
+
+:::
+
+If you downgrade Rancher to v2.7.4 or below, your changes will be lost and the agents will re-deploy without your customizations. The Fleet agent will fall back to using its built-in default values when it re-deploys. If the Fleet version doesn't change during the downgrade, the re-deploy won't be immediate.
+
+
+## RKE Cluster Config File Reference
+
+Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the [options available](https://rancher.com/docs/rke/latest/en/config-options/) in an RKE installation, except for `system_images` configuration. The `system_images` option is not supported when creating a cluster with the Rancher UI or API.
+
+For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/)
+
+### Config File Structure in Rancher
+
+RKE (Rancher Kubernetes Engine) is the tool that Rancher uses to provision Kubernetes clusters. Rancher's cluster config files used to have the same structure as [RKE config files,](https://rancher.com/docs/rke/latest/en/example-yamls/) but the structure changed so that in Rancher, RKE cluster config items are separated from non-RKE config items. Therefore, configuration for your cluster needs to be nested under the `rancher_kubernetes_engine_config` directive in the cluster config file.
Cluster config files created with earlier versions of Rancher will need to be updated for this format. An example cluster config file is included below. + +
+ Example Cluster Config File + +```yaml +# +# Cluster Config +# +docker_root_dir: /var/lib/docker +enable_cluster_alerting: false +enable_cluster_monitoring: false +enable_network_policy: false +local_cluster_auth_endpoint: + enabled: true +# +# Rancher Config +# +rancher_kubernetes_engine_config: # Your RKE template config goes here. + addon_job_timeout: 30 + authentication: + strategy: x509 + ignore_docker_version: true +# +# # Currently only nginx ingress provider is supported. +# # To disable ingress controller, set `provider: none` +# # To enable ingress on specific nodes, use the node_selector, eg: +# provider: nginx +# node_selector: +# app: ingress +# + ingress: + provider: nginx + kubernetes_version: v1.15.3-rancher3-1 + monitoring: + provider: metrics-server +# +# If you are using calico on AWS +# +# network: +# plugin: calico +# calico_network_provider: +# cloud_provider: aws +# +# # To specify flannel interface +# +# network: +# plugin: flannel +# flannel_network_provider: +# iface: eth1 +# +# # To specify flannel interface for canal plugin +# +# network: +# plugin: canal +# canal_network_provider: +# iface: eth1 +# + network: + options: + flannel_backend_type: vxlan + plugin: canal +# +# services: +# kube-api: +# service_cluster_ip_range: 10.43.0.0/16 +# kube-controller: +# cluster_cidr: 10.42.0.0/16 +# service_cluster_ip_range: 10.43.0.0/16 +# kubelet: +# cluster_domain: cluster.local +# cluster_dns_server: 10.43.0.10 +# + services: + etcd: + backup_config: + enabled: true + interval_hours: 12 + retention: 6 + safe_timestamp: false + creation: 12h + extra_args: + election-timeout: 5000 + heartbeat-interval: 500 + gid: 0 + retention: 72h + snapshot: false + uid: 0 + kube_api: + always_pull_images: false + pod_security_policy: false + service_node_port_range: 30000-32767 + ssh_agent_auth: false +windows_prefered_cluster: false +``` +
+
+### Default DNS provider
+
+The table below indicates what DNS provider is deployed by default. See the [RKE documentation on DNS providers](https://rancher.com/docs/rke/latest/en/config-options/add-ons/dns/) for more information on how to configure a different DNS provider. CoreDNS can only be used on Kubernetes v1.12.0 and higher.
+
+| Rancher version | Kubernetes version | Default DNS provider |
+|-------------|--------------------|----------------------|
+| v2.2.5 and higher | v1.14.0 and higher | CoreDNS |
+| v2.2.5 and higher | v1.13.x and lower | kube-dns |
+| v2.2.4 and lower | any | kube-dns |
+
+## Rancher Specific Parameters in YAML
+
+Besides the RKE config file options, there are also Rancher-specific settings that can be configured in the Config File (YAML):
+
+### docker_root_dir
+
+See [Docker Root Directory](#docker-root-directory).
+
+### enable_cluster_monitoring
+
+Option to enable or disable [Cluster Monitoring](../../../integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md).
+
+### enable_network_policy
+
+Option to enable or disable Project Network Isolation.
+
+Project network isolation is available if you are using any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin.
+
+### local_cluster_auth_endpoint
+
+See [Authorized Cluster Endpoint](#authorized-cluster-endpoint).
+
+Example:
+
+```yaml
+local_cluster_auth_endpoint:
+  enabled: true
+  fqdn: "FQDN"
+  ca_certs: |-
+    -----BEGIN CERTIFICATE-----
+    ...
+    -----END CERTIFICATE-----
+```
+
+### Custom Network Plug-in
+
+You can add a custom network plug-in by using the [user-defined add-on functionality](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/) of RKE. You define any add-on that you want deployed after the Kubernetes cluster is deployed.
+
+There are two ways that you can specify an add-on:
+
+- [In-line Add-ons](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/#in-line-add-ons)
+- [Referencing YAML Files for Add-ons](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/#referencing-yaml-files-for-add-ons)
+
+For an example of how to configure a custom network plug-in by editing the `cluster.yml`, refer to the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/custom-network-plugin-example)
\ No newline at end of file
diff --git a/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md b/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md
new file mode 100644
index 00000000000..35ecd76ead2
--- /dev/null
+++ b/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md
@@ -0,0 +1,513 @@
+---
+title: RKE Hardening Guides
+---
+
+
+
+
+
+
+
+This document provides prescriptive guidance for how to harden an RKE cluster intended for production, before provisioning it with Rancher. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).
+
+:::note
+This hardening guide describes how to secure the nodes in your cluster. We recommend that you follow this guide before you install Kubernetes.
+:::
+
+This hardening guide is intended to be used for RKE clusters and is associated with the following versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher:
+
+| Rancher Version | CIS Benchmark Version | Kubernetes Version |
+|-----------------|-----------------------|------------------------------|
+| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.23 |
+| Rancher v2.7 | Benchmark v1.24 | Kubernetes v1.24 |
+| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25 up to v1.26 |
+
+:::note
+- In Benchmark v1.24 and later, check id `4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)` might fail, as `/etc/kubernetes/ssl/kube-ca.pem` is set to 644 by default.
+- In Benchmark v1.7, the `--protect-kernel-defaults` (`4.2.6`) parameter is no longer required, as it was removed by CIS.
+:::
+
+For more details on how to evaluate a hardened RKE cluster against the official CIS benchmark, refer to the RKE self-assessment guides for specific Kubernetes and CIS benchmark versions.
+
+## Host-level requirements
+
+### Configure Kernel Runtime Parameters
+
+The following `sysctl` configuration is recommended for all node types in the cluster. Set the following parameters in `/etc/sysctl.d/90-kubelet.conf`:
+
+```ini
+vm.overcommit_memory=1
+vm.panic_on_oom=0
+kernel.panic=10
+kernel.panic_on_oops=1
+```
+
+Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.
+
+### Configure `etcd` user and group
+
+A user account and group for the **etcd** service must be set up before installing RKE.
+
+#### Create `etcd` user and group
+
+To create the **etcd** user and group, run the following console commands.
+The commands below use `52034` for **uid** and **gid** for example purposes.
+Any valid unused **uid** or **gid** could also be used in lieu of `52034`.
+
+```bash
+groupadd --gid 52034 etcd
+useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd --shell /usr/sbin/nologin
+```
+
+When deploying RKE through its cluster configuration file (`cluster.yml`), update the `uid` and `gid` of the `etcd` user:
+
+```yaml
+services:
+  etcd:
+    gid: 52034
+    uid: 52034
+```
+
+## Kubernetes runtime requirements
+
+### Configure `default` Service Account
+
+#### Set `automountServiceAccountToken` to `false` for `default` service accounts
+
+Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod.
+Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account.
+The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.
+
+For each namespace including `default` and `kube-system` on a standard RKE install, the `default` service account must include this value:
+
+```yaml
+automountServiceAccountToken: false
+```
+
+Save the following configuration to a file called `account_update.yaml`.
+
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: default
+automountServiceAccountToken: false
+```
+
+Create a bash script file called `account_update.sh`.
+Be sure to `chmod +x account_update.sh` so the script has execute permissions.
+
+```bash
+#!/bin/bash -e
+
+for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do
+  kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
+done
+```
+
+Execute this script to apply the `account_update.yaml` configuration to the `default` service account in all namespaces.
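After the patch is applied, the result can be spot-checked. Assuming `kubectl` access to the same cluster, a loop in the same style as the script above prints any namespace whose `default` service account does not yet have the field set to `false`:

```bash
#!/bin/bash -e

# Report namespaces whose `default` service account still automounts a token.
for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do
  value=$(kubectl get serviceaccount default -n ${namespace} -o=jsonpath="{.automountServiceAccountToken}")
  if [ "${value}" != "false" ]; then
    echo "${namespace}: automountServiceAccountToken is not false"
  fi
done
```

An empty result indicates every namespace has been patched.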
+
+### Configure Network Policy
+
+#### Ensure that all Namespaces have Network Policies defined
+
+Running different applications on the same Kubernetes cluster creates a risk of one compromised application attacking a neighboring application. Network segmentation is important to ensure that containers can communicate only with those they are supposed to. A network policy is a specification of how groups of pods are allowed to communicate with each other and with other network endpoints.
+
+Network Policies are namespace-scoped. When a network policy is introduced to a given namespace, all traffic not allowed by the policy is denied. However, if there are no network policies in a namespace, all traffic will be allowed into and out of the pods in that namespace. To enforce network policies, a container network interface (CNI) plugin must be enabled. This guide uses [Canal](https://github.com/projectcalico/canal) to provide the policy enforcement. Additional information about CNI providers can be found [here](https://www.suse.com/c/rancher_blog/comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/).
+
+Once a CNI provider is enabled on a cluster, a default network policy can be applied. For reference purposes, a **permissive** example is provided below. If you want to allow all traffic to all pods in a namespace (even if policies are added that cause some pods to be treated as “isolated”), you can create a policy that explicitly allows all traffic in that namespace. Save the following configuration as `default-allow-all.yaml`. Additional [documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/) about network policies can be found on the Kubernetes site.
+
+:::caution
+This network policy is just an example and is not recommended for production use.
+::: + +```yaml +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: default-allow-all +spec: + podSelector: {} + ingress: + - {} + egress: + - {} + policyTypes: + - Ingress + - Egress +``` + +Create a bash script file called `apply_networkPolicy_to_all_ns.sh`. Be sure to `chmod +x apply_networkPolicy_to_all_ns.sh` so the script has execute permissions. + +```bash +#!/bin/bash -e + +for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do + kubectl apply -f default-allow-all.yaml -n ${namespace} +done +``` + +Execute this script to apply the `default-allow-all.yaml` configuration with the **permissive** `NetworkPolicy` to all namespaces. + +## Known Limitations + +- Rancher **exec shell** and **view logs** for pods are **not** functional in a hardened setup when only a public IP is provided when registering custom nodes. This functionality requires a private IP to be provided when registering the custom nodes. + +## Reference Hardened RKE `cluster.yml` Configuration + +The reference `cluster.yml` is used by the RKE CLI that provides the configuration needed to achieve a hardened installation of RKE. RKE [documentation](https://rancher.com/docs/rke/latest/en/installation/) provides additional details about the configuration items. This reference `cluster.yml` does not include the required `nodes` directive which will vary depending on your environment. Documentation for node configuration in RKE can be found [here](https://rancher.com/docs/rke/latest/en/config-options/nodes/). + +The example `cluster.yml` configuration file contains an Admission Configuration policy in the `services.kube-api.admission_configuration` field. 
This [sample](../../psa-restricted-exemptions.md) policy contains the namespace exemptions necessary for an imported RKE cluster to run properly in Rancher, similar to Rancher's pre-defined [`rancher-restricted`](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) policy. + +If you prefer to use RKE's default `restricted` policy, then leave the `services.kube-api.admission_configuration` field empty and set `services.pod_security_configuration` to `restricted`. See [the RKE docs](https://rke.docs.rancher.com/config-options/services/pod-security-admission) for more information. + + + + +:::note +If you intend to import an RKE cluster into Rancher, please consult the [documentation](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) for how to configure the PSA to exempt Rancher system namespaces. +::: + +```yaml +# If you intend to deploy Kubernetes in an air-gapped environment, +# please consult the documentation on how to configure custom RKE images. +nodes: [] +kubernetes_version: # Define RKE version +services: + etcd: + uid: 52034 + gid: 52034 + kube-api: + secrets_encryption_config: + enabled: true + audit_log: + enabled: true + event_rate_limit: + enabled: true + # Leave `pod_security_configuration` out if you are setting a + # custom policy in `admission_configuration`. Otherwise set + # it to `restricted` to use RKE's pre-defined restricted policy, + # and remove everything inside `admission_configuration` field. 
+ # + # pod_security_configuration: restricted + # + admission_configuration: + apiVersion: apiserver.config.k8s.io/v1 + kind: AdmissionConfiguration + plugins: + - name: PodSecurity + configuration: + apiVersion: pod-security.admission.config.k8s.io/v1 + kind: PodSecurityConfiguration + defaults: + enforce: "restricted" + enforce-version: "latest" + audit: "restricted" + audit-version: "latest" + warn: "restricted" + warn-version: "latest" + exemptions: + usernames: [] + runtimeClasses: [] + namespaces: [calico-apiserver, + calico-system, + cattle-alerting, + cattle-csp-adapter-system, + cattle-elemental-system, + cattle-epinio-system, + cattle-externalip-system, + cattle-fleet-local-system, + cattle-fleet-system, + cattle-gatekeeper-system, + cattle-global-data, + cattle-global-nt, + cattle-impersonation-system, + cattle-istio, + cattle-istio-system, + cattle-logging, + cattle-logging-system, + cattle-monitoring-system, + cattle-neuvector-system, + cattle-prometheus, + cattle-provisioning-capi-system, + cattle-resources-system, + cattle-sriov-system, + cattle-system, + cattle-ui-plugin-system, + cattle-windows-gmsa-system, + cert-manager, + cis-operator-system, + fleet-default, + ingress-nginx, + istio-system, + kube-node-lease, + kube-public, + kube-system, + longhorn-system, + rancher-alerting-drivers, + security-scan, + tigera-operator] + kube-controller: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + kubelet: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + generate_serving_certificate: true +addons: | + apiVersion: networking.k8s.io/v1 + kind: NetworkPolicy + metadata: + name: default-allow-all + spec: + podSelector: {} + ingress: + - {} + egress: + - {} + policyTypes: + - Ingress + - Egress + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: default + automountServiceAccountToken: false +``` + + + + +```yaml +# If you intend to deploy Kubernetes in an air-gapped environment, +# please consult the 
documentation on how to configure custom RKE images. +nodes: [] +kubernetes_version: # Define RKE version +services: + etcd: + uid: 52034 + gid: 52034 + kube-api: + secrets_encryption_config: + enabled: true + audit_log: + enabled: true + event_rate_limit: + enabled: true + pod_security_policy: true + kube-controller: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + kubelet: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + protect-kernel-defaults: true + generate_serving_certificate: true +addons: | + # Upstream Kubernetes restricted PSP policy + # https://github.com/kubernetes/website/blob/564baf15c102412522e9c8fc6ef2b5ff5b6e766c/content/en/examples/policy/restricted-psp.yaml + apiVersion: policy/v1beta1 + kind: PodSecurityPolicy + metadata: + name: restricted-noroot + spec: + privileged: false + # Required to prevent escalations to root. + allowPrivilegeEscalation: false + requiredDropCapabilities: + - ALL + # Allow core volume types. + volumes: + - 'configMap' + - 'emptyDir' + - 'projected' + - 'secret' + - 'downwardAPI' + # Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use. + - 'csi' + - 'persistentVolumeClaim' + - 'ephemeral' + hostNetwork: false + hostIPC: false + hostPID: false + runAsUser: + # Require the container to run without root privileges. + rule: 'MustRunAsNonRoot' + seLinux: + # This policy assumes the nodes are using AppArmor rather than SELinux. + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. + - min: 1 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. 
+ - min: 1 + max: 65535 + readOnlyRootFilesystem: false + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: psp:restricted-noroot + rules: + - apiGroups: + - extensions + resourceNames: + - restricted-noroot + resources: + - podsecuritypolicies + verbs: + - use + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: psp:restricted-noroot + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: psp:restricted-noroot + subjects: + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:serviceaccounts + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:authenticated + --- + apiVersion: networking.k8s.io/v1 + kind: NetworkPolicy + metadata: + name: default-allow-all + spec: + podSelector: {} + ingress: + - {} + egress: + - {} + policyTypes: + - Ingress + - Egress + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: default + automountServiceAccountToken: false +``` + + + + +## Reference Hardened RKE Cluster Template Configuration + +The reference RKE cluster template provides the minimum required configuration to achieve a hardened installation of Kubernetes. RKE templates are used to provision Kubernetes and define Rancher settings. Follow the Rancher [documentation](../../../../getting-started/installation-and-upgrade/installation-and-upgrade.md) for additional information about installing RKE and its template details. 
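+The two templates that follow differ mainly in how pod security is enforced. As a rough sketch (the version boundary is stated here as an assumption, based on the removal of Pod Security Policy in upstream Kubernetes v1.25), the distinguishing fields are:
+
+```yaml
+# Kubernetes v1.25 and later: Pod Security Admission (PSA) configuration template
+default_pod_security_admission_configuration_template_name: rancher-restricted
+
+# Kubernetes v1.24 and earlier: Pod Security Policy (PSP) template
+default_pod_security_policy_template_id: restricted-noroot
+```
+
+Only one of the two fields applies to a given cluster version; the full templates below show each in context.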
+ + + + +```yaml +# +# Cluster Config +# +default_pod_security_admission_configuration_template_name: rancher-restricted +enable_network_policy: true +local_cluster_auth_endpoint: + enabled: true +name: # Define cluster name + +# +# Rancher Config +# +rancher_kubernetes_engine_config: + addon_job_timeout: 45 + authentication: + strategy: x509|webhook + kubernetes_version: # Define RKE version + services: + etcd: + uid: 52034 + gid: 52034 + kube-api: + audit_log: + enabled: true + event_rate_limit: + enabled: true + pod_security_policy: false + secrets_encryption_config: + enabled: true + kube-controller: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 + kubelet: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 + generate_serving_certificate: true + scheduler: + extra_args: + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +``` + + + + +```yaml +# +# Cluster Config +# +default_pod_security_policy_template_id: restricted-noroot +enable_network_policy: true +local_cluster_auth_endpoint: + enabled: true +name: # Define cluster name + +# +# Rancher 
Config +# +rancher_kubernetes_engine_config: + addon_job_timeout: 45 + authentication: + strategy: x509|webhook + kubernetes_version: # Define RKE version + services: + etcd: + uid: 52034 + gid: 52034 + kube-api: + audit_log: + enabled: true + event_rate_limit: + enabled: true + pod_security_policy: true + secrets_encryption_config: + enabled: true + kube-controller: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 + kubelet: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + protect-kernel-defaults: true + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 + generate_serving_certificate: true + scheduler: + extra_args: + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +``` + + + + +## Conclusion + +If you have followed this guide, your RKE custom cluster provisioned by Rancher will be configured to pass the CIS Kubernetes Benchmark. You can review our RKE self-assessment guides to understand how we verified each of the benchmarks and how you can do the same on your cluster. 
\ No newline at end of file
diff --git a/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md b/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md
new file mode 100644
index 00000000000..e8bf71f7c78
--- /dev/null
+++ b/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md
@@ -0,0 +1,2864 @@
+---
+title: RKE Self-Assessment Guide - CIS Benchmark v1.7 - K8s v1.25/v1.26/v1.27
+---
+
+
+
+
+
+
+This document is a companion to the [RKE Hardening Guide](rke1-hardening-guide.md), which provides prescriptive guidance on how to harden RKE clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
+
+
+This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
+
+| Rancher Version | CIS Benchmark Version | Kubernetes Version |
+|-----------------|-----------------------|--------------------|
+| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25/v1.26/v1.27 |
+
+This guide walks through the various controls and provides updated example commands to audit compliance in Rancher-created clusters. Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. These checks return a result of `Not Applicable`.
+
+This document is intended for Rancher operators, security teams, auditors, and decision makers.
+
+For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.7.
You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/). + +## Testing Methodology + +Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files. + +Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the command line on the hosts of all RKE nodes. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required in the testing and evaluation of test results. + +:::note + +This guide only covers `automated` (previously called `scored`) tests. + +::: + +### Controls + +## 1.1 Control Plane Node Configuration Files +### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the +control plane node. +For example, chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. +All configuration is passed in as arguments at container run time. + +### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. 
+All configuration is passed in as arguments at container run time.
+
+### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 /etc/kubernetes/manifests/etcd.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root /etc/kubernetes/manifests/etcd.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600
+
+**Audit:**
+
+```bash
+ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
+```
+
+**Expected Result**:
+
+```console
+'permissions' is present
+```
+
+### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root
+
+**Audit:**
+
+```bash
+ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above). For example,
+chmod 700 /var/lib/etcd
+
+**Audit:**
+
+```bash
+stat -c %a /node/var/lib/etcd
+```
+
+**Expected Result**:
+
+```console
+'700' is equal to '700'
+```
+
+**Returned Value**:
+
+```console
+700
+```
+
+### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above).
+For example, chown etcd:etcd /var/lib/etcd + +**Audit:** + +```bash +stat -c %U:%G /node/var/lib/etcd +``` + +**Expected Result**: + +```console +'etcd:etcd' is present +``` + +**Returned Value**: + +```console +etcd:etcd +``` + +### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chmod 600 /etc/kubernetes/admin.conf +Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. + +### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chown root:root /etc/kubernetes/admin.conf +Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. + +### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chmod 600 scheduler +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. +All configuration is passed in as arguments at container run time. + +### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root scheduler +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. +All configuration is passed in as arguments at container run time. 
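+
+Because RKE runs the control plane as Docker containers configured entirely through command-line arguments, the absence of these configuration files can be confirmed directly on a node. The following is an illustrative check, not part of the benchmark itself; `kube-apiserver` is the container name RKE assigns to the API server:
+
+```bash
+# Print the entrypoint and arguments of the RKE-managed API server container.
+# Substitute kube-controller-manager, kube-scheduler, or etcd as needed.
+docker inspect kube-apiserver --format '{{ .Path }} {{ .Args }}'
+```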
+ +### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chmod 600 controllermanager +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. +All configuration is passed in as arguments at container run time. + +### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root controllermanager +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. +All configuration is passed in as arguments at container run time. + +### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. 
+For example, +chown -R root:root /etc/kubernetes/pki/ + +**Audit Script:** `check_files_owner_in_dir.sh` + +```bash +#!/usr/bin/env bash + +# This script is used to ensure the owner is set to root:root for +# the given directory and all the files in it +# +# inputs: +# $1 = /full/path/to/directory +# +# outputs: +# true/false + +INPUT_DIR=$1 + +if [[ "${INPUT_DIR}" == "" ]]; then + echo "false" + exit +fi + +if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then + echo "false" + exit +fi + +statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*) +while read -r statInfoLine; do + f=$(echo ${statInfoLine} | cut -d' ' -f1) + p=$(echo ${statInfoLine} | cut -d' ' -f2) + + if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then + if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then + echo "false" + exit + fi + else + if [[ "$p" != "root:root" ]]; then + echo "false" + exit + fi + fi +done <<< "${statInfoLines}" + + +echo "true" +exit + +``` + +**Audit Execution:** + +```bash +./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl +``` + +**Expected Result**: + +```console +'true' is equal to 'true' +``` + +**Returned Value**: + +```console +true +``` + +### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual) + + +**Result:** warn + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' -exec chmod -R 600 {} + + +**Audit:** + +```bash +find /node/etc/kubernetes/ssl/ -name '*.pem' ! 
-name '*key.pem' | xargs stat -c permissions=%a +``` + +**Expected Result**: + +```console +permissions has permissions 644, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=644 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 +``` + +### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +find /node/etc/kubernetes/ssl/ -name '*key.pem' -exec chmod -R 600 {} + + +**Audit:** + +```bash +find /node/etc/kubernetes/ssl/ -name '*key.pem' | xargs stat -c permissions=%a +``` + +**Expected Result**: + +```console +permissions has permissions 600, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 +``` + +## 1.2 API Server +### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the below parameter. +--anonymous-auth=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--anonymous-auth' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the documentation and configure alternate mechanisms for authentication. Then, +edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and remove the --token-auth-file= parameter. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--token-auth-file' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and remove the `DenyServiceExternalIPs` +from enabled admission plugins. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the +apiserver and kubelets. Then, edit API server pod specification file +/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the +kubelet client certificate and key parameters as below. +--kubelet-client-certificate= +--kubelet-client-key= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between +the apiserver and kubelets. Then, edit the API server pod specification file +/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the +--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority. +--kubelet-certificate-authority=<path/to/ca-file> +When generating serving certificates, functionality could break in conjunction with hostname overrides, which are required for certain cloud providers. + +### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow. +One such example is shown below. 
+--authorization-mode=RBAC + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' does not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to a value that includes Node. +--authorization-mode=Node,RBAC + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' has 'Node' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to a value that includes RBAC, +for example `--authorization-mode=Node,RBAC`. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' has 'RBAC' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set the desired limits in a configuration file. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +and set the parameters below. +--enable-admission-plugins=...,EventRateLimit,... +--admission-control-config-file=<path/to/configuration/file> + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'EventRateLimit' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a +value that does not include AlwaysAdmit. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual) + + +**Result:** warn + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --enable-admission-plugins parameter to include +AlwaysPullImages. +--enable-admission-plugins=...,AlwaysPullImages,... + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'AlwaysPullImages' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual) + + +**Result:** warn + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --enable-admission-plugins parameter to include +SecurityContextDeny, unless PodSecurityPolicy is already in place. +--enable-admission-plugins=...,SecurityContextDeny,... + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the documentation and create ServiceAccount objects as per your environment. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and ensure that the --disable-admission-plugins parameter is set to a +value that does not include ServiceAccount. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --disable-admission-plugins parameter to +ensure it does not include NamespaceLifecycle. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --enable-admission-plugins parameter to a +value that includes NodeRestriction. +--enable-admission-plugins=...,NodeRestriction,... + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'NodeRestriction' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.16 Ensure that the --secure-port argument is not set to 0 - Note: This recommendation is obsolete and will be deleted per the consensus process (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and either remove the --secure-port parameter or
set it to a different (non-zero) desired port.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--secure-port' is greater than 0 OR '--secure-port' is not present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.17 Ensure that the --profiling argument is set to false (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--profiling=false

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--profiling' is equal to 'false'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.18 Ensure that the --audit-log-path argument is set (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example,
--audit-log-path=/var/log/apiserver/audit.log

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-path' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-maxage parameter to 30
or as an appropriate number of days, for example,
--audit-log-maxage=30

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-maxage' is greater or equal to 30
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
value. For example,
--audit-log-maxbackup=10

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-maxbackup' is greater or equal to 10
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
For example, to set it as 100 MB, --audit-log-maxsize=100

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-maxsize' is greater or equal to 100
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual)


**Result:** warn

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
and set the below parameter as appropriate and if needed.
For example, --request-timeout=300s

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--service-account-lookup=true
Alternatively, you can delete the --service-account-lookup parameter from this file so
that the default takes effect.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --service-account-key-file parameter
to the public key file for service accounts. For example,
--service-account-key-file=

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--service-account-key-file' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)


**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the etcd certificate and key file parameters.
--etcd-certfile=
--etcd-keyfile=

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--etcd-certfile' is present AND '--etcd-keyfile' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection on the apiserver. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the TLS certificate and private key file parameters. +--tls-cert-file= +--tls-private-key-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--tls-cert-file' is present AND '--tls-private-key-file' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection on the apiserver. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the client certificate authority file. +--client-ca-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--client-ca-file' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the etcd certificate authority file parameter. +--etcd-cafile= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--etcd-cafile' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and configure an EncryptionConfig file. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --encryption-provider-config parameter to the path of that file. +For example, --encryption-provider-config= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--encryption-provider-config' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.30 Ensure that encryption providers are appropriately configured (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and configure an EncryptionConfig file. +In this file, choose aescbc, kms or secretbox as the encryption provider. + +**Audit:** + +```bash +ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%') +if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi +``` + +**Expected Result**: + +```console +'provider' is present +``` + +### 1.2.31 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the below parameter. 
+--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256, +TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, +TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, +TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, +TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, +TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, +TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA, +TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384 + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +## 1.3 Controller Manager +### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold, +for example, --terminated-pod-gc-threshold=10 + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--terminated-pod-gc-threshold' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.2 Ensure that the --profiling argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the below parameter. +--profiling=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--profiling' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node to set the below parameter. +--use-service-account-credentials=true + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--use-service-account-credentials' is not equal to 'false' +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --service-account-private-key-file parameter +to the private key file for service accounts. +--service-account-private-key-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--service-account-private-key-file' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --root-ca-file parameter to the certificate bundle file. +--root-ca-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--root-ca-file' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. +--feature-gates=RotateKubeletServerCertificate=true +Clusters provisioned by RKE handle certificate rotation directly through RKE. 
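The automated controller-manager checks above all follow the same audit pattern: capture the process command line, split it into flags, and compare each flag against the Expected Result condition. The sketch below illustrates that pattern in plain shell; the `flag_value` helper and the shortened sample command line are illustrative assumptions (it parses a hardcoded string taken from the Returned Value output shown here, rather than live `ps` output, so it runs without a cluster and is not part of kube-bench itself).

```shell
#!/bin/sh
# Sketch of the kube-bench audit pattern: extract a --flag=value from a
# captured command line and test it. CMDLINE is a truncated sample of the
# kube-controller-manager invocation shown in the Returned Value blocks.
CMDLINE='kube-controller-manager --profiling=false --use-service-account-credentials=true --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --terminated-pod-gc-threshold=1000'

# flag_value NAME prints the value of --NAME=..., or nothing if absent.
flag_value() {
    printf '%s\n' "$CMDLINE" | tr ' ' '\n' | sed -n "s/^--$1=//p"
}

# Mirror the Expected Result conditions of checks 1.3.1-1.3.5 above.
[ -n "$(flag_value terminated-pod-gc-threshold)" ]             && echo "1.3.1 pass"
[ "$(flag_value profiling)" = "false" ]                        && echo "1.3.2 pass"
[ "$(flag_value use-service-account-credentials)" != "false" ] && echo "1.3.3 pass"
[ -n "$(flag_value root-ca-file)" ]                            && echo "1.3.5 pass"
```

Against a live node, `CMDLINE` would instead come from `ps -ef | grep kube-controller-manager | grep -v grep`, exactly as in the Audit commands above.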
+ +### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and ensure the correct value for the --bind-address parameter + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--bind-address' is present OR '--bind-address' is not present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +## 1.4 Scheduler +### 1.4.1 Ensure that the --profiling argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml +on the control 
plane node and set the below parameter. +--profiling=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-scheduler | grep -v grep +``` + +**Expected Result**: + +```console +'--profiling' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4339 4318 0 Sep11 ? 00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true +``` + +### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml +on the control plane node and ensure the correct value for the --bind-address parameter + +**Audit:** + +```bash +/bin/ps -ef | grep kube-scheduler | grep -v grep +``` + +**Expected Result**: + +```console +'--bind-address' is present OR '--bind-address' is not present +``` + +**Returned Value**: + +```console +root 4339 4318 0 Sep11 ? 
00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true +``` + +## 2 Etcd Node Configuration +### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the etcd service documentation and configure TLS encryption. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml +on the master node and set the below parameters. +--cert-file= +--key-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--cert-file' is present AND '--key-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and set the below parameter. +--client-cert-auth="true" + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.3 Ensure that the --auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and either remove the --auto-tls parameter or set it to false. 
+ --auto-tls=false + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present +``` + +**Returned Value**: + +```console +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ +``` + +### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the etcd service documentation and configure peer TLS encryption as appropriate +for your etcd cluster. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the +master node and set the below parameters. +--peer-cert-file= +--peer-key-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--peer-cert-file' is present AND '--peer-key-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and set the below parameter. +--peer-client-cert-auth=true + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--peer-client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and either remove the --peer-auto-tls parameter or set it to false. 
+--peer-auto-tls=false + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present +``` + +**Returned Value**: + +```console +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ +``` + +### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated) + + +**Result:** pass + +**Remediation:** +[Manual test] +Follow the etcd documentation and create a dedicated certificate authority setup for the +etcd service. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the +master node and set the below parameter. +--trusted-ca-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--trusted-ca-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +## 3.1 Authentication and Authorization +### 3.1.1 Client certificate authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be +implemented in place of client certificates. + +### 3.1.2 Service account token authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented +in place of service account tokens. + +### 3.1.3 Bootstrap token authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented +in place of bootstrap tokens. + +## 3.2 Logging +### 3.2.1 Ensure that a minimal audit policy is created (Automated) + + +**Result:** pass + +**Remediation:** +Create an audit policy file for your cluster. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--audit-policy-file' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 3.2.2 Ensure that the audit policy covers key security concerns (Manual) + + +**Result:** warn + +**Remediation:** +Review the audit policy provided for the cluster and ensure that it covers +at least the following areas, +- Access to Secrets managed by the cluster. Care should be taken to only + log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in + order to avoid risk of logging sensitive data. +- Modification of Pod and Deployment objects. +- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`. +For most requests, minimally logging at the Metadata level is recommended +(the most basic level of logging). + +## 4.1 Worker Node Configuration Files +### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service. +All configuration is passed in as arguments at container run time. 
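The file-permission checks throughout section 4.1 all follow the same audit/remediation pattern: read the mode with `stat -c permissions=%a` and tighten it with `chmod 600` when it is more permissive. A minimal sketch of that pattern, run against a throwaway file rather than a real kubeconfig, assuming GNU coreutils `stat`:

```shell
# Demonstrate the section 4.1 audit/remediation pattern on a temporary
# file (a real target would be a kubeconfig such as
# /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml).
f=$(mktemp)
chmod 644 "$f"               # simulate an over-permissive file
stat -c permissions=%a "$f"  # audit: prints permissions=644
chmod 600 "$f"               # remediation: owner read/write only
stat -c permissions=%a "$f"  # re-audit: prints permissions=600
rm -f "$f"
```

The `if test -e … ; then stat … ; fi` wrapper in the audit commands above simply makes a check a no-op when the optional file does not exist.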
+ +### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, +chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service. +All configuration is passed in as arguments at container run time. + +### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, +chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml + +**Audit:** + +```bash +/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' +``` + +**Expected Result**: + +```console +permissions has permissions 600, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 +``` + +### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. 
+For example, chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml + +**Audit:** + +```bash +/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' +``` + +**Expected Result**: + +```console +'root:root' is present +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, +chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml + +**Audit:** + +```bash +/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' +``` + +**Expected Result**: + +```console +permissions has permissions 600, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 +``` + +### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. 
+For example,
+chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the following command to modify the permissions of the file referenced by
+--client-ca-file. For example,
+chmod 600 /node/etc/kubernetes/ssl/kube-ca.pem
+
+**Audit:**
+
+```bash
+stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the following command to modify the ownership of the --client-ca-file. For example,
+chown root:root /node/etc/kubernetes/ssl/kube-ca.pem
+
+**Audit:**
+
+```bash
+stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the following command (using the config file location identified in the Audit step)
+chmod 600 /var/lib/kubelet/config.yaml
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet.
+All configuration is passed in as arguments at container run time.
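The 4.1.x checks above all follow the same audit-then-remediate pattern: read the mode or ownership with `stat`, then tighten it with `chmod`/`chown`. A minimal sketch of that pattern, run against a throwaway temp file rather than the real `/node/etc/kubernetes/ssl` paths from the Audit steps:

```shell
# Audit-then-remediate sketch for the 4.1.x file checks. Uses a scratch
# file for illustration; on a real node you would target the paths from
# the Audit steps above, and the ownership fix would be
# `chown root:root <file>` (which requires root).
f=$(mktemp)
chmod 644 "$f"                        # simulate the failing state from 4.1.7
before=$(stat -c permissions=%a "$f") # audit, same output format as the checks
chmod 600 "$f"                        # remediation: 600 or more restrictive
after=$(stat -c permissions=%a "$f")
echo "$before -> $after"              # permissions=644 -> permissions=600
rm -f "$f"
```

Note that `stat -c` is the GNU coreutils syntax used throughout this guide's Audit commands; BSD `stat` takes different flags.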
+
+### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the following command (using the config file location identified in the Audit step)
+chown root:root /var/lib/kubelet/config.yaml
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet.
+All configuration is passed in as arguments at container run time.
+
+## 4.2 Kubelet
+### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
+`false`.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+`--anonymous-auth=false`
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--anonymous-auth' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If +using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--authorization-mode=Webhook +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--authorization-mode' does not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file 
to set `authentication.x509.clientCAFile` to +the location of the client CA file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--client-ca-file= +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--client-ca-file' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local 
--tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `readOnlyPort` to 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--read-only-port=0 +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--read-only-port' is equal to '0' OR '--read-only-port' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a +value other than 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--streaming-connection-idle-timeout=5m +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated) + + +**Result:** pass 
+ +**Remediation:** +If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +remove the --make-iptables-util-chains argument from the +KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 
--kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.7 Ensure that the --hostname-override argument is not set (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+on each worker node and remove the --hostname-override argument from the
+KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+Not Applicable - Clusters provisioned by RKE set --hostname-override to avoid hostname configuration errors.
+
+### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the `--event-qps` parameter in the KUBELET_SYSTEM_PODS_ARGS variable to an appropriate level.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--event-qps' is greater or equal to 0 OR '--event-qps' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `tlsCertFile` to the location +of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile` +to the location of the corresponding private key file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameters in KUBELET_CERTIFICATE_ARGS variable. 
+--tls-cert-file= +--tls-private-key-file= +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--tls-cert-file' is present AND '--tls-private-key-file' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.10 Ensure that the --rotate-certificates argument is 
not set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `rotateCertificates` to `true`, or
+remove the setting altogether to use the default value.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+remove the --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
+variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--rotate-certificates' is present OR '--rotate-certificates' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true 
--cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
+--feature-gates=RotateKubeletServerCertificate=true
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+Not Applicable - Clusters provisioned by RKE handle certificate rotation directly through RKE.
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
+TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+or to a subset of these values.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the --tls-cipher-suites parameter as follows, or to a subset of these values.
+--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.13 Ensure that a limit is set on pod PIDs (Manual) + + +**Result:** warn + +**Remediation:** +Decide on an appropriate level for this parameter and set it, +either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting. + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--pod-max-pids' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +## 5.1 RBAC and Service Accounts +### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual) + + +**Result:** warn + +**Remediation:** +Identify all clusterrolebindings to the cluster-admin role. Check if they are used and +if they need this role or if they could use a role with fewer privileges. 
+Where possible, first bind users to a lower privileged role and then remove the
+clusterrolebinding to the cluster-admin role:
+kubectl delete clusterrolebinding [name]
+
+### 5.1.2 Minimize access to secrets (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove get, list and watch access to Secret objects in the cluster.
+
+### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, replace any use of wildcards in clusterroles and roles with specific
+objects or actions.
+
+### 5.1.4 Minimize access to create pods (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove create access to pod objects in the cluster.
+
+### 5.1.5 Ensure that default service accounts are not actively used (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Create explicit service accounts wherever a Kubernetes workload requires specific access
+to the Kubernetes API server.
+Modify the configuration of each default service account to include this value:
+automountServiceAccountToken: false
+
+**Audit Script:** `check_for_default_sa.sh`
+
+```bash
+#!/bin/bash
+
+set -eE
+
+handle_error() {
+  echo "false"
+}
+
+trap 'handle_error' ERR
+
+count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l)
+if [[ ${count_sa} -gt 0 ]]; then
+  echo "false"
+  exit
+fi
+
+for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name")
+do
+  for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"')
+  do
+    read kind name <<<$(IFS=","; echo $result)
+    
resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l) + if [[ ${resource_count} -gt 0 ]]; then + echo "false" + exit + fi + done +done + + +echo "true" + +``` + +**Audit Execution:** + +```bash +./check_for_default_sa.sh +``` + +**Expected Result**: + +```console +'true' is equal to 'true' +``` + +**Returned Value**: + +```console +true +``` + +### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) + + +**Result:** warn + +**Remediation:** +Modify the definition of pods and service accounts which do not need to mount service +account tokens to disable it. + +### 5.1.7 Avoid use of system:masters group (Manual) + + +**Result:** warn + +**Remediation:** +Remove the system:masters group from all users in the cluster. + +### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove the impersonate, bind and escalate rights from subjects. + +### 5.1.9 Minimize access to create persistent volumes (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove create access to PersistentVolume objects in the cluster. + +### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the proxy sub-resource of node objects. + +### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the approval sub-resource of certificatesigningrequest objects. 
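Controls 5.1.2 through 5.1.11 above all come down to inspecting RBAC rules. As an offline illustration (not one of the official benchmark audit scripts), the sketch below scans ClusterRole-shaped data for wildcards (5.1.3) and for the `bind`/`impersonate`/`escalate` verbs (5.1.8). The `roles` sample is invented; in a real audit you would load the output of `kubectl get clusterroles -o json` instead.

```python
# Offline illustration of the 5.1.3 (wildcards) and 5.1.8 (risky verbs) checks.
# `roles` mirrors the shape of the `items` list from `kubectl get clusterroles -o json`.

RISKY_VERBS = {"bind", "impersonate", "escalate"}

def audit_rules(role):
    """Return human-readable findings for one (Cluster)Role dict."""
    findings = []
    for rule in role.get("rules", []):
        verbs = set(rule.get("verbs", []))
        resources = set(rule.get("resources", []))
        if "*" in verbs or "*" in resources:
            findings.append(f"{role['metadata']['name']}: wildcard in rule {rule}")
        if verbs & RISKY_VERBS:
            findings.append(
                f"{role['metadata']['name']}: risky verbs {sorted(verbs & RISKY_VERBS)}"
            )
    return findings

# Invented sample data: one over-privileged role, one minimal role.
roles = [
    {"metadata": {"name": "ops-admin"},
     "rules": [{"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]}]},
    {"metadata": {"name": "viewer"},
     "rules": [{"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list"]}]},
]

for role in roles:
    for finding in audit_rules(role):
        print(finding)
```

Only `ops-admin` is flagged here; a role like `viewer`, scoped to specific resources and read-only verbs, is what the remediations steer you toward.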
+ +### 5.1.12 Minimize access to webhook configuration objects (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects + +### 5.1.13 Minimize access to the service account token creation (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the token sub-resource of serviceaccount objects. + +## 5.2 Pod Security Standards +### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual) + + +**Result:** warn + +**Remediation:** +Ensure that either Pod Security Admission or an external policy control system is in place +for every namespace which contains user workloads. + +### 5.2.2 Minimize the admission of privileged containers (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of privileged containers. + +### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostPID` containers. + +### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostIPC` containers. + +### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostNetwork` containers. 
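The 5.2 remediations above repeatedly say to "add policies to each namespace". If Pod Security Admission is the policy control mechanism in place (one of the options contemplated by 5.2.1), such a policy is usually expressed as labels on the namespace itself. The namespace name below is illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example-workloads   # illustrative namespace name
  labels:
    # Reject pods that violate the baseline profile (covers the privileged,
    # hostPID, hostIPC and hostNetwork admission checked in 5.2.2-5.2.5).
    pod-security.kubernetes.io/enforce: baseline
    # Additionally warn and audit against the stricter restricted profile.
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```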
+
+### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
+
+### 5.2.7 Minimize the admission of root containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
+or `MustRunAs` with the range of UIDs not including 0, is set.
+
+### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with the `NET_RAW` capability.
+
+### 5.2.9 Minimize the admission of containers with added capabilities (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that `allowedCapabilities` is not present in policies for the cluster unless
+it is set to an empty array.
+
+### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the use of capabilities in applications running on your cluster. Where a namespace
+contains applications which do not require any Linux capabilities to operate, consider adding
+a PSP which forbids the admission of containers which do not drop all capabilities.
+
+### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
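A quick way to see what 5.2.8 through 5.2.10 are asking for is to check each container's `securityContext.capabilities`. The following self-contained sketch (invented sample pod data, not an official audit script) flags containers that add capabilities or fail to drop `ALL`; point it at real `kubectl get pod <name> -o json` output in practice.

```python
# Offline illustration of the capability checks behind controls 5.2.8-5.2.10.

def container_capability_findings(pod):
    """Flag containers that add capabilities or do not drop ALL."""
    findings = []
    for container in pod["spec"].get("containers", []):
        caps = (container.get("securityContext") or {}).get("capabilities") or {}
        added = caps.get("add") or []
        dropped = caps.get("drop") or []
        if added:
            findings.append(f"{container['name']}: adds capabilities {added}")
        if "ALL" not in dropped:
            findings.append(f"{container['name']}: does not drop ALL capabilities")
    return findings

# Invented sample: one compliant container, one that would be flagged.
pod = {
    "spec": {
        "containers": [
            {"name": "good",
             "securityContext": {"capabilities": {"drop": ["ALL"]}}},
            {"name": "bad",
             "securityContext": {"capabilities": {"add": ["NET_RAW"]}}},
        ]
    }
}

for finding in container_capability_findings(pod):
    print(finding)
```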
+ +### 5.2.12 Minimize the admission of HostPath volumes (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of containers with `hostPath` volumes. + +### 5.2.13 Minimize the admission of containers which use HostPorts (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of containers which use `hostPort` sections. + +## 5.3 Network Policies and CNI +### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual) + + +**Result:** warn + +**Remediation:** +If the CNI plugin in use does not support network policies, consideration should be given to +making use of a different plugin, or finding an alternate mechanism for restricting traffic +in the Kubernetes cluster. + +### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual) + + +**Result:** warn + +**Remediation:** +Follow the documentation and create NetworkPolicy objects as you need them. + +## 5.4 Secrets Management +### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual) + + +**Result:** warn + +**Remediation:** +If possible, rewrite application code to read Secrets from mounted secret files, rather than +from environment variables. + +### 5.4.2 Consider external secret storage (Manual) + + +**Result:** warn + +**Remediation:** +Refer to the Secrets management options offered by your cloud provider or a third-party +secrets management solution. + +## 5.5 Extensible Admission Control +### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and setup image provenance. 
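For 5.5.1, the `ImagePolicyWebhook` admission controller is enabled through the API server's admission configuration. The fragment below is only a sketch: the file path and TTL values are assumptions, to be replaced with your own webhook's kubeconfig, certificates, and tuning.

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: ImagePolicyWebhook
    configuration:
      imagePolicy:
        # Assumed path: a kubeconfig pointing the API server at your
        # image-policy webhook backend.
        kubeConfigFile: /etc/kubernetes/image-policy/kubeconfig.yaml
        allowTTL: 50
        denyTTL: 50
        retryBackoff: 500
        # Fail closed: reject images when the webhook is unreachable.
        defaultAllow: false
```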
+ +## 5.7 General Policies +### 5.7.1 Create administrative boundaries between resources using namespaces (Manual) + + +**Result:** warn + +**Remediation:** +Follow the documentation and create namespaces for objects in your deployment as you need +them. + +### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual) + + +**Result:** warn + +**Remediation:** +Use `securityContext` to enable the docker/default seccomp profile in your pod definitions. +An example is as below: + securityContext: + seccompProfile: + type: RuntimeDefault + +### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a +suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker +Containers. + +### 5.7.4 The default namespace should not be used (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Ensure that namespaces are created to allow for appropriate segregation of Kubernetes +resources and that all new resources are created in a specific namespace. 
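Tying the 5.7 controls together, a workload that satisfies these policies runs in its own namespace (5.7.1, 5.7.4) with an explicit `securityContext` (5.7.2, 5.7.3). All names and the image below are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example        # illustrative name
  namespace: example-workloads  # a dedicated namespace, not `default` (5.7.4)
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault      # the default seccomp profile from 5.7.2
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```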
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md new file mode 100644 index 00000000000..043a6b43cc9 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md @@ -0,0 +1,357 @@ +--- +title: RKE 集群配置参考 +--- + + + +Rancher 安装 Kubernetes 时,它使用 [RKE](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) 或 [RKE2](https://docs.rke2.io/) 作为 Kubernetes 发行版。 + +本文介绍 Rancher 中可用于新的或现有的 RKE Kubernetes 集群的配置选项。 + + +## 概述 + +你可以通过以下两种方式之一来配置 Kubernetes 选项: + +- [Rancher UI](#rancher-ui-中的配置选项):使用 Rancher UI 来选择设置 Kubernetes 集群时常用的自定义选项。 +- [集群配置文件](#rke-集群配置文件参考):高级用户可以创建一个 RKE 配置文件,而不是使用 Rancher UI 来为集群选择 Kubernetes 选项。配置文件可以让你使用 YAML 来指定 RKE 安装中可用的任何选项(除了 system_images 配置)。 + +RKE 集群配置选项嵌套在 `rancher_kubernetes_engine_config` 参数下。有关详细信息,请参阅[集群配置文件](#rke-集群配置文件参考)。 + +在 [RKE 启动的集群](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md)中,你可以编辑任何后续剩余的选项。 + +有关 RKE 配置文件语法的示例,请参阅 [RKE 文档](https://rancher.com/docs/rke/latest/en/example-yamls/)。 + +Rancher UI 中的表单不包括配置 RKE 的所有高级选项。有关 YAML 中 RKE Kubernetes 集群的可配置选项的完整参考,请参阅 [RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/)。 + +## 在 Rancher UI 中使用表单编辑集群 + +要编辑你的集群: + +1. 在左上角,单击 **☰ > 集群管理**。 +1. 转到要配置的集群,然后单击 **⋮ > 编辑配置**。 + + +## 使用 YAML 编辑集群 + +高级用户可以创建一个 RKE 配置文件,而不是使用 Rancher UI 来为集群选择 Kubernetes 选项。配置文件可以让你使用 YAML 来指定 RKE 安装中可用的任何选项(除了 system_images 配置)。 + +RKE 集群(也称为 RKE1 集群)的编辑方式与 RKE2 和 K3s 集群不同。 + +要直接从 Rancher UI 编辑 RKE 配置文件: + +1. 点击 **☰ > 集群管理**。 +1. 
转到要配置的 RKE 集群,然后单击 **⋮ > 编辑配置**。你将会转到 RKE 配置表单。请注意,由于集群配置在 Rancher 2.6 中发生了变更,**⋮ > 以 YAML 文件编辑**可用于配置 RKE2 集群,但不能用于编辑 RKE1 配置。
+1. 在配置表单中,向下滚动并单击**以 YAML 文件编辑**。
+1. 编辑 `rancher_kubernetes_engine_config` 参数下的 RKE 选项。
+
+## Rancher UI 中的配置选项
+
+:::tip
+
+一些高级配置选项没有在 Rancher UI 表单中开放,但你可以通过在 YAML 中编辑 RKE 集群配置文件来启用这些选项。有关 YAML 中 RKE Kubernetes 集群的可配置选项的完整参考,请参阅 [RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/)。
+
+:::
+
+### Kubernetes 版本
+
+这指的是集群节点上安装的 Kubernetes 版本。Rancher 基于 [hyperkube](https://github.com/rancher/hyperkube) 打包了自己的 Kubernetes 版本。
+
+有关更多详细信息,请参阅[升级 Kubernetes](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md)。
+
+### 网络提供商
+
+这指的是集群使用的[网络提供商](https://kubernetes.io/docs/concepts/cluster-administration/networking/)。有关不同网络提供商的更多详细信息,请查看我们的[网络常见问题解答](../../../faq/container-network-interface-providers.md)。
+
+:::caution
+
+启动集群后,你无法更改网络提供商。由于 Kubernetes 不允许在网络提供商之间切换,因此,请谨慎选择要使用的网络提供商。使用网络提供商创建集群后,如果你需要更改网络提供商,你将需要拆除整个集群以及其中的所有应用。
+
+:::
+
+Rancher 与以下开箱即用的网络提供商兼容:
+
+- [Canal](https://github.com/projectcalico/canal)
+- [Flannel](https://github.com/coreos/flannel#flannel)
+- [Calico](https://docs.projectcalico.org/v3.11/introduction/)
+- [Weave](https://github.com/weaveworks/weave)
+
+:::note Weave 注意事项:
+
+选择 Weave 作为网络提供商时,Rancher 将通过生成随机密码来自动启用加密。如果你想手动指定密码,请参阅使用[配置文件](#rke-集群配置文件参考)和 [Weave 网络插件选项](https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/#weave-network-plug-in-options)来配置集群。
+
+:::
+
+### 项目网络隔离
+
+如果你的网络提供商允许项目网络隔离,你可以选择启用或禁用项目间的通信。
+
+如果你使用支持执行 Kubernetes 网络策略的 RKE 网络插件(例如 Canal 或 Cisco ACI 插件),则可以使用项目网络隔离。
+
+### Kubernetes 云提供商
+
+你可以配置 [Kubernetes 云提供商](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md)。如果你想在 Kubernetes
中使用动态配置的[卷和存储](../../../how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md),你通常需要选择特定的云提供商。例如,如果你想使用 Amazon EBS,则需要选择 `aws` 云提供商。 + +:::note + +如果你要使用的云提供商未作为选项列出,你需要使用[配置文件选项](#rke-集群配置文件参考)来配置云提供商。请参考 [RKE 云提供商文档](https://rancher.com/docs/rke/latest/en/config-options/cloud-providers/)来了解如何配置云提供商。 + +::: + +### 私有镜像仓库 + +集群级别的私有镜像仓库配置仅能用于配置集群。 + +在 Rancher 中设置私有镜像仓库的主要方法有两种:通过[全局默认镜像仓库](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry.md)中的**设置**选项卡设置全局默认镜像仓库,以及在集群级别设置的高级选项中设置私有镜像仓库。全局默认镜像仓库可以用于离线设置,不需要凭证的镜像仓库。而集群级私有镜像仓库用于所有需要凭证的私有镜像仓库。 + +如果你的私有镜像仓库需要凭证,为了将凭证传递给 Rancher,你需要编辑每个需要从仓库中拉取镜像的集群的集群选项。 + +私有镜像仓库的配置选项能让 Rancher 知道要从哪里拉取用于集群的[系统镜像](https://rancher.com/docs/rke/latest/en/config-options/system-images/)或[附加组件镜像](https://rancher.com/docs/rke/latest/en/config-options/add-ons/)。 + +- **系统镜像**是维护 Kubernetes 集群所需的组件。 +- **附加组件**用于部署多个集群组件,包括网络插件、ingress controller、DNS 提供商或 metrics server。 + +有关为集群配置期间应用的组件设置私有镜像仓库的更多信息,请参阅[私有镜像仓库的 RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/private-registries/)。 + +Rancher v2.6 引入了[为 RKE 集群配置 ECR 镜像仓库](https://rancher.com/docs/rke/latest/en/config-options/private-registries/#amazon-elastic-container-registry-ecr-private-registry-setup)的功能。 + +### 授权集群端点 + +授权集群端点(ACE)可用于直接访问 Kubernetes API server,而无需通过 Rancher 进行通信。 + +:::note + +授权集群端点仅适用于 Rancher 启动的 Kubernetes 集群,即只适用于 Rancher [使用 RKE](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#配置-kubernetes-集群的工具) 来配置的集群。它不适用于托管在 Kubernetes 提供商中的集群,例如 Amazon 的 EKS。 + +::: + +在 Rancher 启动的 Kubernetes 集群中,它默认启用,使用具有 `controlplane` 角色的节点的 IP 和默认的 Kubernetes 自签名证书。 + +有关授权集群端点的工作原理以及使用的原因,请参阅[架构介绍](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点)。 + 
+我们建议使用具有授权集群端点的负载均衡器。有关详细信息,请参阅[推荐的架构](../../rancher-manager-architecture/architecture-recommendations.md#授权集群端点架构)。 + +### 节点池 + +有关使用 Rancher UI 在 RKE 集群中设置节点池的信息,请参阅[此页面](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md)。 + +### NGINX Ingress + +如果你想使用高可用性配置来发布应用,并且你使用没有原生负载均衡功能的云提供商来托管主机,请启用此选项以在集群中使用 NGINX Ingress。 + +### Metrics Server 监控 + +这是启用或禁用 [Metrics Server](https://rancher.com/docs/rke/latest/en/config-options/add-ons/metrics-server/) 的选项。 + +每个能够使用 RKE 启动集群的云提供商都可以收集指标并监控你的集群节点。如果启用此选项,你可以从你的云提供商门户查看你的节点指标。 + +### 节点上的 Docker 版本 + +表示是否允许节点运行 Rancher 不正式支持的 Docker 版本。 + +如果你选择使用支持的 Docker 版本,Rancher 会禁止 pod 运行在安装了不支持的 Docker 版本的节点上。 + +如需了解各个 Rancher 版本通过了哪些 Docker 版本测试,请参见[支持和维护条款](https://rancher.com/support-maintenance-terms/)。 + +### Docker 根目录 + +如果要添加到集群的节点为 Docker 配置了非默认 Docker 根目录(默认为 `/var/lib/docker`),请在此选项中指定正确的 Docker 根目录。 + +### 默认 Pod 安全策略 + +如果你启用了 **Pod 安全策略支持**,请使用此下拉菜单选择应用于集群的 pod 安全策略。 + +### 节点端口范围 + +更改可用于 [NodePort 服务](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport)的端口范围的选项。默认为 `30000-32767`。 + +### 定期 etcd 快照 + +启用或禁用[定期 etcd 快照](https://rancher.com/docs/rke/latest/en/etcd-snapshots/#etcd-recurring-snapshots)的选项。 + +### Agent 环境变量 + +为 [rancher agent](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md) 设置环境变量的选项。你可以使用键值对设置环境变量。如果 Rancher Agent 需要使用代理与 Rancher Server 通信,则可以使用 Agent 环境变量设置 `HTTP_PROXY`,`HTTPS_PROXY` 和 `NO_PROXY` 环境变量。 + +### 更新 ingress-nginx + +使用 Kubernetes 1.16 之前版本创建的集群将具有 `OnDelete`的 `ingress-nginx` `updateStrategy`。使用 Kubernetes 1.16 或更高版本创建的集群将具有 `RollingUpdate`。 + +如果 `ingress-nginx` 的 `updateStrategy` 是 `OnDelete`,则需要删除这些 pod 以获得 deployment 正确的版本。 + +### Cluster Agent 配置和 Fleet Agent 配置 + +你可以为 Cluster Agent 和集群的 Fleet Agent 配置调度字段和资源限制。你可以使用这些字段来自定义容忍度、亲和性规则和资源要求。其他容忍度会被尾附到默认容忍度和 Control Plane 
节点污点的列表中。如果你定义了自定义亲和性规则,它们将覆盖全局默认亲和性设置。定义资源要求会在以前没有的地方设置请求或限制。 + +:::note + +有了这个选项,你可以覆盖或删除运行集群所需的规则。我们强烈建议你不要删除或覆盖这些规则和其他亲和性规则,因为这可能会导致不必要的影响: + +- `affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution` 用于 `cattle-cluster-agent` +- `cluster-agent-default-affinity` 用于 `cattle-cluster-agent` +- `fleet-agent-default-affinity` 用于 `fleet-agent` + +::: + +如果将 Rancher 降级到 v2.7.4 或更低版本,你的更改将丢失,而且 Agent 将在没有你的自定义设置的情况下重新部署。重新部署时,Fleet Agent 将回退到使用内置默认值。如果降级期间 Fleet 版本没有更改,则不会立即重新部署。 + + +## RKE 集群配置文件参考 + +高级用户可以创建一个 RKE 配置文件,而不是使用 Rancher UI 来为集群选择 Kubernetes 选项。配置文件可以让你在 RKE 安装中设置任何[可用选项](https://rancher.com/docs/rke/latest/en/config-options/)(`system_images` 配置除外)。使用 Rancher UI 或 API 创建集群时,不支持 `system_images` 选项。 + +有关 YAML 中 RKE Kubernetes 集群的可配置选项的完整参考,请参阅 [RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/)。 + +### Rancher 中的配置文件结构 + +RKE(Rancher Kubernetes Engine)是 Rancher 用来配置 Kubernetes 集群的工具。过去,Rancher 的集群配置文件与 [RKE 配置文件](https://rancher.com/docs/rke/latest/en/example-yamls/)的结构是一致的。但由于 Rancher 文件结构发生了变化,因此在 Rancher 中,RKE 集群配置项与非 RKE 配置项是分开的。所以,你的集群配置需要嵌套在集群配置文件中的 `rancher_kubernetes_engine_config` 参数下。使用早期版本的 Rancher 创建的集群配置文件需要针对这种格式进行更新。以下是一个集群配置文件示例: + +
+ 集群配置文件示例 + +```yaml +# +# Cluster Config +# +docker_root_dir: /var/lib/docker +enable_cluster_alerting: false +enable_cluster_monitoring: false +enable_network_policy: false +local_cluster_auth_endpoint: + enabled: true +# +# Rancher Config +# +rancher_kubernetes_engine_config: # Your RKE template config goes here. + addon_job_timeout: 30 + authentication: + strategy: x509 + ignore_docker_version: true +# +# # 目前仅支持 Nginx ingress provider +# # 要禁用 Ingress controller,设置 `provider: none` +# # 要在指定节点上禁用 Ingress,使用 node_selector,例如: +# provider: nginx +# node_selector: +# app: ingress +# + ingress: + provider: nginx + kubernetes_version: v1.15.3-rancher3-1 + monitoring: + provider: metrics-server +# +# If you are using calico on AWS +# +# network: +# plugin: calico +# calico_network_provider: +# cloud_provider: aws +# +# # To specify flannel interface +# +# network: +# plugin: flannel +# flannel_network_provider: +# iface: eth1 +# +# # To specify flannel interface for canal plugin +# +# network: +# plugin: canal +# canal_network_provider: +# iface: eth1 +# + network: + options: + flannel_backend_type: vxlan + plugin: canal +# +# services: +# kube-api: +# service_cluster_ip_range: 10.43.0.0/16 +# kube-controller: +# cluster_cidr: 10.42.0.0/16 +# service_cluster_ip_range: 10.43.0.0/16 +# kubelet: +# cluster_domain: cluster.local +# cluster_dns_server: 10.43.0.10 +# + services: + etcd: + backup_config: + enabled: true + interval_hours: 12 + retention: 6 + safe_timestamp: false + creation: 12h + extra_args: + election-timeout: 5000 + heartbeat-interval: 500 + gid: 0 + retention: 72h + snapshot: false + uid: 0 + kube_api: + always_pull_images: false + pod_security_policy: false + service_node_port_range: 30000-32767 + ssh_agent_auth: false +windows_prefered_cluster: false +``` +
+ +### 默认 DNS 提供商 + +下表显示了默认部署的 DNS 提供商。有关如何配置不同 DNS 提供商的更多信息,请参阅 [DNS 提供商相关的 RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/add-ons/dns/)。CoreDNS 只能在 Kubernetes v1.12.0 及更高版本上使用。 + +| Rancher 版本 | Kubernetes 版本 | 默认 DNS 提供商 | +|-------------|--------------------|----------------------| +| v2.2.5 及更高版本 | v1.14.0 及更高版本 | CoreDNS | +| v2.2.5 及更高版本 | v1.13.x 及更低版本 | kube-dns | +| v2.2.4 及更低版本 | 任意 | kube-dns | + +## YAML 中的 Rancher 特定参数 + +除了 RKE 配置文件选项外,还有可以在配置文件 (YAML) 中配置的 Rancher 特定设置如下。 + +### docker_root_dir + +请参阅 [Docker 根目录](#docker-根目录)。 + +### enable_cluster_monitoring + +启用或禁用[集群监控](../../../integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md)的选项。 + +### enable_network_policy + +启用或禁用项目网络隔离的选项。 + +如果你使用支持执行 Kubernetes 网络策略的 RKE 网络插件(例如 Canal 或 Cisco ACI 插件),则可以使用项目网络隔离。 + +### local_cluster_auth_endpoint + +请参阅[授权集群端点](#授权集群端点)。 + +示例: + +```yaml +local_cluster_auth_endpoint: + enabled: true + fqdn: "FQDN" + ca_certs: |- + -----BEGIN CERTIFICATE----- + ... 
+    -----END CERTIFICATE-----
+```
+
+### 自定义网络插件
+
+你可以使用 RKE 的[用户定义的附加组件功能](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/)来添加自定义网络插件。部署 Kubernetes 集群之后,你可以定义要部署的任何附加组件。
+
+有两种方法可以指定附加组件:
+
+- [内嵌附加组件](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/#in-line-add-ons)
+- [为附加组件引用 YAML 文件](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/#referencing-yaml-files-for-add-ons)
+
+有关如何通过编辑 `cluster.yml` 来配置自定义网络插件的示例,请参阅 [RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/custom-network-plugin-example)。
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md
new file mode 100644
index 00000000000..62a27bd3e86
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md
@@ -0,0 +1,516 @@
+---
+title: RKE 加固指南
+---
+
+
+
+
+
+
+
+本文档提供了针对生产环境的 RKE 集群进行加固的具体指导,以便在使用 Rancher 部署之前进行配置。它概述了满足互联网安全中心(Center for Internet Security, CIS)Kubernetes benchmark controls 所需的配置和控制。
+
+:::note
+这份加固指南描述了如何确保你集群中的节点安全。我们建议你在安装 Kubernetes 之前遵循本指南。
+:::
+
+此加固指南适用于 RKE 集群,并与以下版本的 CIS Kubernetes Benchmark、Kubernetes 和 Rancher 相关联:

+| Rancher 版本 | CIS Benchmark 版本 | Kubernetes 版本 |
+|-----------------|-----------------------|------------------------------|
+| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.23 |
+| Rancher v2.7 | Benchmark v1.24 | Kubernetes v1.24 |
+| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25 至 v1.26 |
+
+:::note
+- 在 Benchmark v1.24 及更高版本中,检查 id `4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)`
可能会失败,因为 `/etc/kubernetes/ssl/kube-ca.pem` 默认设置为 644。
+- 在 Benchmark v1.7 中,不再需要 `--protect-kernel-defaults` (`4.2.6`) 参数,并已被 CIS 删除。
+:::
+
+有关如何评估加固的 RKE 集群与官方 CIS benchmark 的更多细节,请参考特定 Kubernetes 和 CIS benchmark 版本的 RKE 自我评估指南。
+
+## 主机级别要求
+
+### 配置 Kernel 运行时参数
+
+建议对集群中的所有节点类型使用以下 `sysctl` 配置。在 `/etc/sysctl.d/90-kubelet.conf` 中设置以下参数:
+
+```ini
+vm.overcommit_memory=1
+vm.panic_on_oom=0
+kernel.panic=10
+kernel.panic_on_oops=1
+```
+
+运行 `sysctl -p /etc/sysctl.d/90-kubelet.conf` 以启用设置。
+
+### 配置 `etcd` 用户和组
+
+在安装 RKE 之前,需要设置 **etcd** 服务的用户帐户和组。
+
+#### 创建 `etcd` 用户和组
+
+要创建 **etcd** 用户和组,请运行以下控制台命令。
+下面的命令示例中使用 `52034` 作为 **uid** 和 **gid**。
+任何有效且未使用的 **uid** 或 **gid** 都可以代替 `52034`。
+
+```bash
+groupadd --gid 52034 etcd
+useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd --shell /usr/sbin/nologin
+```
+
+在通过集群配置文件 `config.yml` 部署 RKE 时,请更新 `etcd` 用户的 `uid` 和 `gid`:
+
+```yaml
+services:
+  etcd:
+    gid: 52034
+    uid: 52034
+```
+
+## Kubernetes 运行时要求
+
+### 配置 `default` Service Account
+
+#### 设置 `automountServiceAccountToken` 为 `false` 用于 `default` service accounts
+
+Kubernetes 提供了一个 default service account,供集群工作负载使用,其中没有为 pod 分配特定的 service account。
+如果需要从 pod 访问 Kubernetes API,则应为该 pod 创建特定的 service account,并向该 service account 授予权限。
+应配置 default service account,使其不提供 service account 令牌,并且不应具有任何明确的权限分配。
+
+对于标准 RKE 安装上的每个命名空间(包括 `default` 和 `kube-system`),`default` service account 必须包含以下值:
+
+```yaml
+automountServiceAccountToken: false
+```
+
+将以下配置保存到名为 `account_update.yaml` 的文件中。
+
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: default
+automountServiceAccountToken: false
+```
+
+创建一个名为 `account_update.sh` 的 bash 脚本文件。
+确保执行 `chmod +x account_update.sh` 命令,以赋予脚本执行权限。
+
+```bash
+#!/bin/bash -e
+
+for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do
+  kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
+done
+```
+
+执行此脚本将 `account_update.yaml`
配置应用到所有命名空间中的 `default` service account。 + +### 配置网络策略 + +#### 确保所有命名空间都定义了网络策略 + +在同一个 Kubernetes 集群上运行不同的应用程序会带来风险,即某个受感染的应用程序可能会攻击相邻的应用程序。为确保容器只与其预期通信的容器进行通信,网络分段至关重要。网络策略规定了哪些 Pod 可以互相通信,以及与其他网络终端通信的方式。 + +网络策略是命名空间范围的。当在特定命名空间引入网络策略时,所有未被策略允许的流量将被拒绝。然而,如果在命名空间中没有网络策略,那么所有流量将被允许进入和离开该命名空间中的 Pod。要强制执行网络策略,必须启用容器网络接口(container network interface, CNI)插件。本指南使用 [Canal](https://github.com/projectcalico/canal) 来提供策略执行。有关 CNI 提供程序的其他信息可以在[这里](https://www.suse.com/c/rancher_blog/comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/)找到。 + +一旦在集群上启用了 CNI 提供程序,就可以应用默认的网络策略。下面提供了一个 **permissive** 的示例供参考。如果你希望允许匹配某个命名空间中所有 Pod 的所有入站和出站流量(即使添加了策略导致某些 Pod 被视为”隔离”),你可以创建一个明确允许该命名空间中所有流量的策略。请将以下配置保存为 `default-allow-all.yaml`。有关网络策略的其他[文档](https://kubernetes.io/docs/concepts/services-networking/network-policies/)可以在 Kubernetes 站点上找到。 + +:::caution +此网络策略只是一个示例,不建议用于生产用途。 +::: + +```yaml +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: default-allow-all +spec: + podSelector: {} + ingress: + - {} + egress: + - {} + policyTypes: + - Ingress + - Egress +``` + +创建一个名为 `apply_networkPolicy_to_all_ns.sh`的 Bash 脚本文件。 + +确保运行 `chmod +x apply_networkPolicy_to_all_ns.sh` 命令,以赋予脚本执行权限。 + +```bash +#!/bin/bash -e + +for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do + kubectl apply -f default-allow-all.yaml -n ${namespace} +done +``` + +执行此脚本以将 `default-allow-all.yaml` 配置和 **permissive** 的 `NetworkPolicy` 应用于所有命名空间。 + +## 已知限制 + +- 当注册自定义节点仅提供公共 IP 时,Rancher **exec shell** 和 **查看 pod 日志** 在加固设置中**不起作用**。 此功能需要在注册自定义节点时提供私有 IP。 + +## 加固的 RKE `cluster.yml` 配置参考 + +参考的 `cluster.yml` 文件是由 RKE CLI 使用的,它提供了实现 RKE 加固安装所需的配置。 +RKE [文档](https://rancher.com/docs/rke/latest/en/installation/)提供了有关配置项的更多详细信息。这里参考的 `cluster.yml` 不包括必需的 `nodes` 指令,因为它取决于你的环境。在 RKE 中有关节点配置的文档可以在[这里](https://rancher.com/docs/rke/latest/en/config-options/nodes/)找到。 + +示例 `cluster.yml` 配置文件中包含了一个 Admission Configuration 策略,在 
`services.kube-api.admission_configuration` 字段中指定。这个[示例](../../psa-restricted-exemptions.md)策略包含了命名空间的豁免规则,这对于在Rancher中正确运行导入的RKE集群非常必要,类似于Rancher预定义的 [`rancher-restricted`](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) 策略。 + +如果你希望使用 RKE 的默认 `restricted` 策略,则将 `services.kube-api.admission_configuration` 字段留空,并将 `services.pod_security_configuration` 设置为 `restricted`。你可以在 [RKE 文档](https://rke.docs.rancher.com/config-options/services/pod-security-admission)中找到更多信息。 + + + + +:::note +如果你打算将一个 RKE 集群导入到 Rancher 中,请参考此[文档](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md)以了解如何配置 PSA 以豁免 Rancher 系统命名空间。 +::: + +```yaml +# 如果你打算在离线环境部署 Kubernetes, +# 请查阅文档以了解如何配置自定义的 RKE 镜像。 +nodes: [] +kubernetes_version: # 定义 RKE 版本 +services: + etcd: + uid: 52034 + gid: 52034 + kube-api: + secrets_encryption_config: + enabled: true + audit_log: + enabled: true + event_rate_limit: + enabled: true + # 如果你在 `admission_configuration` 中设置了自定义策略, + # 请将 `pod_security_configuration` 字段留空。 + # 否则,将其设置为 `restricted` 以使用 RKE 预定义的受限策略, + # 并删除 `admission_configuration` 字段中的所有内容。 + # + # pod_security_configuration: restricted + # + admission_configuration: + apiVersion: apiserver.config.k8s.io/v1 + kind: AdmissionConfiguration + plugins: + - name: PodSecurity + configuration: + apiVersion: pod-security.admission.config.k8s.io/v1 + kind: PodSecurityConfiguration + defaults: + enforce: "restricted" + enforce-version: "latest" + audit: "restricted" + audit-version: "latest" + warn: "restricted" + warn-version: "latest" + exemptions: + usernames: [] + runtimeClasses: [] + namespaces: [calico-apiserver, + calico-system, + cattle-alerting, + cattle-csp-adapter-system, + cattle-elemental-system, + cattle-epinio-system, + cattle-externalip-system, + cattle-fleet-local-system, + cattle-fleet-system, + cattle-gatekeeper-system, + cattle-global-data, + 
cattle-global-nt, + cattle-impersonation-system, + cattle-istio, + cattle-istio-system, + cattle-logging, + cattle-logging-system, + cattle-monitoring-system, + cattle-neuvector-system, + cattle-prometheus, + cattle-provisioning-capi-system, + cattle-resources-system, + cattle-sriov-system, + cattle-system, + cattle-ui-plugin-system, + cattle-windows-gmsa-system, + cert-manager, + cis-operator-system, + fleet-default, + ingress-nginx, + istio-system, + kube-node-lease, + kube-public, + kube-system, + longhorn-system, + rancher-alerting-drivers, + security-scan, + tigera-operator] + kube-controller: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + kubelet: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + generate_serving_certificate: true +addons: | + apiVersion: networking.k8s.io/v1 + kind: NetworkPolicy + metadata: + name: default-allow-all + spec: + podSelector: {} + ingress: + - {} + egress: + - {} + policyTypes: + - Ingress + - Egress + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: default + automountServiceAccountToken: false +``` + + + + +```yaml +# 如果你打算在离线环境部署 Kubernetes, +# 请查阅文档以了解如何配置自定义的 RKE 镜像。 +nodes: [] +kubernetes_version: # 定义 RKE 版本 +services: + etcd: + uid: 52034 + gid: 52034 + kube-api: + secrets_encryption_config: + enabled: true + audit_log: + enabled: true + event_rate_limit: + enabled: true + pod_security_policy: true + kube-controller: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + kubelet: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + protect-kernel-defaults: true + generate_serving_certificate: true +addons: | + # Upstream Kubernetes restricted PSP policy + # https://github.com/kubernetes/website/blob/564baf15c102412522e9c8fc6ef2b5ff5b6e766c/content/en/examples/policy/restricted-psp.yaml + apiVersion: policy/v1beta1 + kind: PodSecurityPolicy + metadata: + name: restricted-noroot + spec: + privileged: false + # Required to prevent 
escalations to root. + allowPrivilegeEscalation: false + requiredDropCapabilities: + - ALL + # Allow core volume types. + volumes: + - 'configMap' + - 'emptyDir' + - 'projected' + - 'secret' + - 'downwardAPI' + # Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use. + - 'csi' + - 'persistentVolumeClaim' + - 'ephemeral' + hostNetwork: false + hostIPC: false + hostPID: false + runAsUser: + # Require the container to run without root privileges. + rule: 'MustRunAsNonRoot' + seLinux: + # This policy assumes the nodes are using AppArmor rather than SELinux. + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. + - min: 1 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. + - min: 1 + max: 65535 + readOnlyRootFilesystem: false + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: psp:restricted-noroot + rules: + - apiGroups: + - extensions + resourceNames: + - restricted-noroot + resources: + - podsecuritypolicies + verbs: + - use + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: psp:restricted-noroot + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: psp:restricted-noroot + subjects: + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:serviceaccounts + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:authenticated + --- + apiVersion: networking.k8s.io/v1 + kind: NetworkPolicy + metadata: + name: default-allow-all + spec: + podSelector: {} + ingress: + - {} + egress: + - {} + policyTypes: + - Ingress + - Egress + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: default + automountServiceAccountToken: false +``` + + + + +## 加固后的 RKE 集群模板配置参考 + +参考的 RKE 集群模板提供了实现 Kubernetes 加固安装所需的最低配置。RKE 模板用于提供 Kubernetes 并定义 Rancher 设置。有关安装 RKE 及其模板详情的其他信息,请参考 Rancher 
[文档](../../../../getting-started/installation-and-upgrade/installation-and-upgrade.md) 。 + + + + +```yaml +# +# 集群配置 +# +default_pod_security_admission_configuration_template_name: rancher-restricted +enable_network_policy: true +local_cluster_auth_endpoint: + enabled: true +name: # 定义集群名称 + +# +# Rancher 配置 +# +rancher_kubernetes_engine_config: + addon_job_timeout: 45 + authentication: + strategy: x509|webhook + kubernetes_version: # 定义 RKE 版本 + services: + etcd: + uid: 52034 + gid: 52034 + kube-api: + audit_log: + enabled: true + event_rate_limit: + enabled: true + pod_security_policy: false + secrets_encryption_config: + enabled: true + kube-controller: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 + kubelet: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 + generate_serving_certificate: true + scheduler: + extra_args: + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +``` + + + + +```yaml +# +# 集群配置 +# +default_pod_security_policy_template_id: restricted-noroot +enable_network_policy: true +local_cluster_auth_endpoint: + enabled: true 
+name: # 定义集群名称 + +# +# Rancher 配置 +# +rancher_kubernetes_engine_config: + addon_job_timeout: 45 + authentication: + strategy: x509|webhook + kubernetes_version: # 定义 RKE 版本 + services: + etcd: + uid: 52034 + gid: 52034 + kube-api: + audit_log: + enabled: true + event_rate_limit: + enabled: true + pod_security_policy: true + secrets_encryption_config: + enabled: true + kube-controller: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 + kubelet: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + protect-kernel-defaults: true + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 + generate_serving_certificate: true + scheduler: + extra_args: + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +``` + + + + +## 结论 + +如果你按照本指南操作,由 Rancher 提供的 RKE 自定义集群将配置为通过 CIS Kubernetes Benchmark 测试。你可以查看我们的 RKE 自我评估指南,了解我们是如何验证每个 benchmarks 的,并且你可以在你的集群上执行相同的操作。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md 
b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md new file mode 100644 index 00000000000..1a3eb88ed98 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md @@ -0,0 +1,2863 @@ +--- +title: RKE 自我评估指南 - CIS Benchmark v1.7 - K8s v1.25/v1.26/v1.27 +--- + + + + + + + +本文档是 [RKE 加固指南](rke1-hardening-guide.md)的配套文档,该指南提供了关于如何加固正在生产环境中运行并由 Rancher 管理的 RKE 集群的指导方针。本 benchmark 指南可帮助你根据 CIS Kubernetes Benchmark 中的每个 control 来评估加固集群的安全性。 + +本指南对应以下版本的 Rancher、CIS Benchmarks 和 Kubernetes: + +| Rancher 版本 | CIS Benchmark 版本 | Kubernetes 版本 | +|-----------------|-----------------------|--------------------| +| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25/v1.26/v1.27 | + +本指南将介绍各种 controls,并提供更新的示例命令来审计 Rancher 创建的集群中的合规性。由于 Rancher 和 RKE 将 Kubernetes 服务安装为 Docker 容器,因此 CIS Kubernetes Benchmark 中的许多 control 验证检查不适用。这些检查将返回 `Not Applicable` 的结果。 + +本文档适用于 Rancher 运维人员、安全团队、审计员和决策者。 + +有关每个 control 的更多信息,包括详细描述和未通过测试的补救措施,请参考 CIS Kubernetes Benchmark v1.7 的相应部分。你可以在[互联网安全中心 (CIS)](https://www.cisecurity.org/benchmark/kubernetes/)创建免费账户后下载 benchmark。 + +## 测试方法 + +Rancher 和 RKE 通过 Docker 容器安装 Kubernetes 服务。配置是通过初始化时传递给容器的参数定义的,而不是通过配置文件。 + +在 control 审计与原始 CIS benchmark 不同时,提供了针对 Rancher 的特定审计命令以进行测试。在执行测试时,你将需要访问所有 RKE 节点主机上的命令行。这些命令还使用了 [kubectl](https://kubernetes.io/docs/tasks/tools/)(带有有效的配置文件)和 [jq](https://stedolan.github.io/jq/) 工具,在测试和评估测试结果时这些工具是必需的。 + +:::note + +本指南仅涵盖 `automated`(之前称为 `scored`)测试。 + +::: + +### Controls + +## 1.1 Control Plane Node Configuration Files +### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file 
location on your system) on the
+control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
+All configuration is passed in as arguments at container run time.
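Every file-permission control in this section applies the same pass/fail rule: the observed mode must be 600 (700 for directories) or more restrictive, meaning no group or other permission bits may be set. A minimal bash sketch of that comparison (the `check_mode` helper is hypothetical, for illustration only; it is not part of kube-bench or RKE):

```shell
#!/usr/bin/env bash
# check_mode: report whether a file's permissions are 600 or more
# restrictive, i.e. the group and other octal digits are both 0.
# (Hypothetical helper for illustration; not part of kube-bench.)
check_mode() {
  local mode go
  mode=$(stat -c %a "$1")   # e.g. "600" or "644"
  go=${mode: -2}            # last two octal digits: group + other
  if [ "$go" = "00" ]; then
    echo "PASS $1 ($mode)"
  else
    echo "FAIL $1 ($mode)"
  fi
}

tmp=$(mktemp)
chmod 600 "$tmp"
check_mode "$tmp"   # owner-only access: PASS
chmod 644 "$tmp"
check_mode "$tmp"   # world-readable: FAIL
rm -f "$tmp"
```

This is the same interpretation the audit commands in this guide apply to `stat -c permissions=%a` output: 400 or 600 passes, while anything granting group or other access (640, 644, 755) fails.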
+
+### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 /etc/kubernetes/manifests/etcd.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root /etc/kubernetes/manifests/etcd.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 
+
+**Audit:**
+
+```bash
+ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
+```
+
+**Expected Result**:
+
+```console
+'permissions' is present
+```
+
+### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root 
+
+**Audit:**
+
+```bash
+ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above).
For example, +chmod 700 /var/lib/etcd + +**Audit:** + +```bash +stat -c %a /node/var/lib/etcd +``` + +**Expected Result**: + +```console +'700' is equal to '700' +``` + +**Returned Value**: + +```console +700 +``` + +### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) + + +**Result:** pass + +**Remediation:** +On the etcd server node, get the etcd data directory, passed as an argument --data-dir, +from the command 'ps -ef | grep etcd'. +Run the below command (based on the etcd data directory found above). +For example, chown etcd:etcd /var/lib/etcd + +**Audit:** + +```bash +stat -c %U:%G /node/var/lib/etcd +``` + +**Expected Result**: + +```console +'etcd:etcd' is present +``` + +**Returned Value**: + +```console +etcd:etcd +``` + +### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chmod 600 /etc/kubernetes/admin.conf +Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. + +### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chown root:root /etc/kubernetes/admin.conf +Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. + +### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. 
+For example, +chmod 600 scheduler +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. +All configuration is passed in as arguments at container run time. + +### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root scheduler +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. +All configuration is passed in as arguments at container run time. + +### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chmod 600 controllermanager +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. +All configuration is passed in as arguments at container run time. + +### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root controllermanager +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. +All configuration is passed in as arguments at container run time. + +### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. 
+For example, +chown -R root:root /etc/kubernetes/pki/ + +**Audit Script:** `check_files_owner_in_dir.sh` + +```bash +#!/usr/bin/env bash + +# This script is used to ensure the owner is set to root:root for +# the given directory and all the files in it +# +# inputs: +# $1 = /full/path/to/directory +# +# outputs: +# true/false + +INPUT_DIR=$1 + +if [[ "${INPUT_DIR}" == "" ]]; then + echo "false" + exit +fi + +if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then + echo "false" + exit +fi + +statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*) +while read -r statInfoLine; do + f=$(echo ${statInfoLine} | cut -d' ' -f1) + p=$(echo ${statInfoLine} | cut -d' ' -f2) + + if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then + if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then + echo "false" + exit + fi + else + if [[ "$p" != "root:root" ]]; then + echo "false" + exit + fi + fi +done <<< "${statInfoLines}" + + +echo "true" +exit + +``` + +**Audit Execution:** + +```bash +./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl +``` + +**Expected Result**: + +```console +'true' is equal to 'true' +``` + +**Returned Value**: + +```console +true +``` + +### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual) + + +**Result:** warn + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' -exec chmod -R 600 {} + + +**Audit:** + +```bash +find /node/etc/kubernetes/ssl/ -name '*.pem' ! 
-name '*key.pem' | xargs stat -c permissions=%a +``` + +**Expected Result**: + +```console +permissions has permissions 644, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=644 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 +``` + +### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +find /node/etc/kubernetes/ssl/ -name '*key.pem' -exec chmod -R 600 {} + + +**Audit:** + +```bash +find /node/etc/kubernetes/ssl/ -name '*key.pem' | xargs stat -c permissions=%a +``` + +**Expected Result**: + +```console +permissions has permissions 600, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 +``` + +## 1.2 API Server +### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the below parameter. +--anonymous-auth=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--anonymous-auth' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the documentation and configure alternate mechanisms for authentication. Then, +edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and remove the --token-auth-file= parameter. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--token-auth-file' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and remove the `DenyServiceExternalIPs` +from enabled admission plugins. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the +apiserver and kubelets. Then, edit API server pod specification file +/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the +kubelet client certificate and key parameters as below. +--kubelet-client-certificate= +--kubelet-client-key= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Follow the Kubernetes documentation and setup the TLS connection between +the apiserver and kubelets. Then, edit the API server pod specification file +/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the +--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority. +--kubelet-certificate-authority= +When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. + +### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow. +One such example could be as below. 
+--authorization-mode=RBAC + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' does not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --authorization-mode parameter to a value that includes Node.
--authorization-mode=Node,RBAC

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--authorization-mode' has 'Node'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
for example `--authorization-mode=Node,RBAC`.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--authorization-mode' has 'RBAC'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set the desired limits in a configuration file.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
and set the below parameters.
--enable-admission-plugins=...,EventRateLimit,...
--admission-control-config-file=

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--enable-admission-plugins' has 'EventRateLimit'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
value that does not include AlwaysAdmit.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual)

**Result:** warn

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --enable-admission-plugins parameter to include
AlwaysPullImages.
--enable-admission-plugins=...,AlwaysPullImages,...

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--enable-admission-plugins' has 'AlwaysPullImages'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)

**Result:** warn

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --enable-admission-plugins parameter to include
SecurityContextDeny, unless PodSecurityPolicy is already in place.
--enable-admission-plugins=...,SecurityContextDeny,...

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated)

**Result:** pass

**Remediation:**
Follow the documentation and create ServiceAccount objects as per your environment.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
value that does not include ServiceAccount.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --disable-admission-plugins parameter to
ensure it does not include NamespaceLifecycle.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and configure the NodeRestriction plug-in on kubelets.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --enable-admission-plugins parameter to a
value that includes NodeRestriction.
--enable-admission-plugins=...,NodeRestriction,...

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--enable-admission-plugins' has 'NodeRestriction'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.16 Ensure that the --secure-port argument is not set to 0 - Note: This recommendation is obsolete and will be deleted per the consensus process (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and either remove the --secure-port parameter or
set it to a different (non-zero) desired port.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--secure-port' is greater than 0 OR '--secure-port' is not present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.17 Ensure that the --profiling argument is set to false (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--profiling=false

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--profiling' is equal to 'false'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.18 Ensure that the --audit-log-path argument is set (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example,
--audit-log-path=/var/log/apiserver/audit.log

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-path' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-maxage parameter to 30
or as an appropriate number of days, for example,
--audit-log-maxage=30

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-maxage' is greater or equal to 30
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
value. For example,
--audit-log-maxbackup=10

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-maxbackup' is greater or equal to 10
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
For example, to set it as 100 MB, --audit-log-maxsize=100

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-maxsize' is greater or equal to 100
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual)

**Result:** warn

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
and set the below parameter as appropriate and if needed.
For example, --request-timeout=300s

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--service-account-lookup=true
Alternatively, you can delete the --service-account-lookup parameter from this file so
that the default takes effect.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --service-account-key-file parameter
to the public key file for service accounts. For example,
--service-account-key-file=&lt;filename&gt;

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--service-account-key-file' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the etcd certificate and key file parameters.
--etcd-certfile=&lt;path/to/client-certificate-file&gt;
--etcd-keyfile=&lt;path/to/client-key-file&gt;

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--etcd-certfile' is present AND '--etcd-keyfile' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the TLS certificate and private key file parameters.
--tls-cert-file=&lt;path/to/tls-certificate-file&gt;
--tls-private-key-file=&lt;path/to/tls-key-file&gt;

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--tls-cert-file' is present AND '--tls-private-key-file' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection on the apiserver. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the client certificate authority file. +--client-ca-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--client-ca-file' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the etcd certificate authority file parameter. +--etcd-cafile= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--etcd-cafile' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and configure an EncryptionConfig file. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --encryption-provider-config parameter to the path of that file. +For example, --encryption-provider-config= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--encryption-provider-config' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.30 Ensure that encryption providers are appropriately configured (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and configure an EncryptionConfig file. +In this file, choose aescbc, kms or secretbox as the encryption provider. + +**Audit:** + +```bash +ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%') +if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi +``` + +**Expected Result**: + +```console
'provider' is present +``` + +### 1.2.31 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the below parameter. 
+--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256, +TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, +TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, +TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, +TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, +TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, +TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA, +TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384 + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +## 1.3 Controller Manager +### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold, +for example, --terminated-pod-gc-threshold=10 + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--terminated-pod-gc-threshold' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.2 Ensure that the --profiling argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the below parameter. +--profiling=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--profiling' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node to set the below parameter. +--use-service-account-credentials=true + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--use-service-account-credentials' is not equal to 'false' +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --service-account-private-key-file parameter +to the private key file for service accounts. +--service-account-private-key-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--service-account-private-key-file' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --root-ca-file parameter to the certificate bundle file. +--root-ca-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console
'--root-ca-file' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. +--feature-gates=RotateKubeletServerCertificate=true +Clusters provisioned by RKE handle certificate rotation directly through RKE. 
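The audits in this guide all follow the same pattern: pull a component's command line from the process table with `ps` and extract the flag of interest with `grep`. A minimal sketch of that pattern for the feature gate above, run against a captured command line (the `CMDLINE` variable and its value are illustrative, not taken from a live cluster):

```shell
# Illustrative only: instead of grepping a live process table, apply the same
# flag extraction the audits use to a captured controller-manager command line.
CMDLINE='kube-controller-manager --feature-gates=RotateKubeletServerCertificate=true --profiling=false'

# Print just the feature-gate setting, mirroring the benchmark's grep-based checks.
echo "$CMDLINE" | grep -o 'RotateKubeletServerCertificate=[a-z]*'
# prints: RotateKubeletServerCertificate=true
```

On an RKE node the same extraction would be fed from `ps -ef | grep kube-controller-manager | grep -v grep` rather than a shell variable.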
+ +### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and ensure the correct value for the --bind-address parameter + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--bind-address' is present OR '--bind-address' is not present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +## 1.4 Scheduler +### 1.4.1 Ensure that the --profiling argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file +on the control 
plane node and set the below parameter. +--profiling=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-scheduler | grep -v grep +``` + +**Expected Result**: + +```console +'--profiling' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4339 4318 0 Sep11 ? 00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true +``` + +### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml +on the control plane node and ensure the correct value for the --bind-address parameter + +**Audit:** + +```bash +/bin/ps -ef | grep kube-scheduler | grep -v grep +``` + +**Expected Result**: + +```console +'--bind-address' is present OR '--bind-address' is not present +``` + +**Returned Value**: + +```console +root 4339 4318 0 Sep11 ? 
00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true +``` + +## 2 Etcd Node Configuration +### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the etcd service documentation and configure TLS encryption. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml +on the master node and set the below parameters. +--cert-file= +--key-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--cert-file' is present AND '--key-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and set the below parameter. +--client-cert-auth="true" + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.3 Ensure that the --auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and either remove the --auto-tls parameter or set it to false. 
+ --auto-tls=false + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present +``` + +**Returned Value**: + +```console +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ +``` + +### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the etcd service documentation and configure peer TLS encryption as appropriate +for your etcd cluster. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the +master node and set the below parameters. +--peer-cert-file= +--peer-key-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--peer-cert-file' is present AND '--peer-key-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ?
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and set the below parameter. +--peer-client-cert-auth=true + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--peer-client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and either remove the --peer-auto-tls parameter or set it to false. 
+--peer-auto-tls=false + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present +``` + +**Returned Value**: + +```console +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ +``` + +### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated) + + +**Result:** pass + +**Remediation:** +[Manual test] +Follow the etcd documentation and create a dedicated certificate authority setup for the +etcd service. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the +master node and set the below parameter. +--trusted-ca-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--trusted-ca-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +## 3.1 Authentication and Authorization +### 3.1.1 Client certificate authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be +implemented in place of client certificates. + +### 3.1.2 Service account token authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented +in place of service account tokens. + +### 3.1.3 Bootstrap token authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented +in place of bootstrap tokens. + +## 3.2 Logging +### 3.2.1 Ensure that a minimal audit policy is created (Automated) + + +**Result:** pass + +**Remediation:** +Create an audit policy file for your cluster. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--audit-policy-file' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 3.2.2 Ensure that the audit policy covers key security concerns (Manual) + + +**Result:** warn + +**Remediation:** +Review the audit policy provided for the cluster and ensure that it covers +at least the following areas: +- Access to Secrets managed by the cluster. Care should be taken to only + log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in + order to avoid risk of logging sensitive data. +- Modification of Pod and Deployment objects. +- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`. +For most requests, minimally logging at the Metadata level is recommended +(the most basic level of logging). + +## 4.1 Worker Node Configuration Files +### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service. +All configuration is passed in as arguments at container run time.
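
Since RKE passes all kubelet configuration as container arguments, one way to confirm on a worker node that this check is Not Applicable is to verify that no kubelet systemd unit file exists. This is only a sketch; the paths below are the common kubeadm defaults, used here purely for illustration:

```bash
# On an RKE-provisioned node both paths should normally be reported as
# absent; if a file is found, its permissions are printed so it can be
# checked against the 600-or-stricter requirement.
for f in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf \
         /usr/lib/systemd/system/kubelet.service; do
  if test -e "$f"; then
    stat -c 'permissions=%a %n' "$f"
  else
    echo "absent: $f"
  fi
done
```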
+ +### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, +chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service. +All configuration is passed in as arguments at container run time. + +### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, +chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml + +**Audit:** + +```bash +/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' +``` + +**Expected Result**: + +```console +permissions has permissions 600, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 +``` + +### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node.
+For example, chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml + +**Audit:** + +```bash +/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' +``` + +**Expected Result**: + +```console +'root:root' is present +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, +chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml + +**Audit:** + +```bash +/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' +``` + +**Expected Result**: + +```console +permissions has permissions 600, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 +``` + +### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node.
+For example, +chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml + +**Audit:** + +```bash +/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' +``` + +**Expected Result**: + +```console +'root:root' is present +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated) + + +**Result:** fail + +**Remediation:** +Run the following command to modify the file permissions of the +--client-ca-file chmod 600 + +**Audit:** + +```bash +stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem +``` + +**Expected Result**: + +```console +permissions has permissions 644, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=644 +``` + +### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the following command to modify the ownership of the --client-ca-file. +chown root:root + +**Audit:** + +```bash +stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem +``` + +**Expected Result**: + +```console +'root:root' is equal to 'root:root' +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the following command (using the config file location identified in the Audit step) +chmod 600 /var/lib/kubelet/config.yaml +Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet. +All configuration is passed in as arguments at container run time. 
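
Check 4.1.7 above is the only failing item in this group. A minimal remediation sketch follows, using the path from its audit command; note the `/node` prefix there is the kube-bench container's host mount, so on the node itself the file sits at `/etc/kubernetes/ssl/kube-ca.pem`:

```bash
# Tighten the CA file flagged by 4.1.7 to 600, then re-check with the
# same stat invocation the audit uses; repeat on every affected node.
CA_FILE=/etc/kubernetes/ssl/kube-ca.pem
chmod 600 "$CA_FILE"
stat -c 'permissions=%a' "$CA_FILE"
```

Re-running the benchmark afterwards should then report `permissions=600` for this check.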
+ +### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Run the following command (using the config file location identified in the Audit step) +chown root:root /var/lib/kubelet/config.yaml +Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet. +All configuration is passed in as arguments at container run time. + +## 4.2 Kubelet +### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to +`false`. +If using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in the KUBELET_SYSTEM_PODS_ARGS variable. +`--anonymous-auth=false` +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--anonymous-auth' is equal to 'false' +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ?
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If +using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--authorization-mode=Webhook +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--authorization-mode' does not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file 
to set `authentication.x509.clientCAFile` to +the location of the client CA file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--client-ca-file= +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--client-ca-file' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local 
--tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `readOnlyPort` to 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--read-only-port=0 +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--read-only-port' is equal to '0' OR '--read-only-port' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a +value other than 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--streaming-connection-idle-timeout=5m +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated) + + +**Result:** pass 
+ +**Remediation:** +If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +remove the --make-iptables-util-chains argument from the +KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 
--kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.7 Ensure that the --hostname-override argument is not set (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +on each worker node and remove the --hostname-override argument from the +KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service +Not Applicable - Clusters provisioned by RKE set the --hostname-override to avoid any hostname configuration errors + +### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--event-qps' is greater or equal to 0 OR '--event-qps' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `tlsCertFile` to the location +of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile` +to the location of the corresponding private key file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameters in KUBELET_CERTIFICATE_ARGS variable. 
+--tls-cert-file= +--tls-private-key-file= +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--tls-cert-file' is present AND '--tls-private-key-file' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.10 Ensure that the --rotate-certificates argument is 
not set to false (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to add the line `rotateCertificates` to `true` or +remove it altogether to use the default value. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS +variable. +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--rotate-certificates' is present OR '--rotate-certificates' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true 
--cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
+--feature-gates=RotateKubeletServerCertificate=true
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+Not Applicable - Clusters provisioned by RKE handle certificate rotation directly through RKE.
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
+TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+or to a subset of these values.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the --tls-cipher-suites parameter as follows, or to a subset of these values.
+--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.13 Ensure that a limit is set on pod PIDs (Manual) + + +**Result:** warn + +**Remediation:** +Decide on an appropriate level for this parameter and set it, +either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting. + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--pod-max-pids' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +## 5.1 RBAC and Service Accounts +### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual) + + +**Result:** warn + +**Remediation:** +Identify all clusterrolebindings to the cluster-admin role. Check if they are used and +if they need this role or if they could use a role with fewer privileges. 
+Where possible, first bind users to a lower privileged role and then remove the +clusterrolebinding to the cluster-admin role : +kubectl delete clusterrolebinding [name] + +### 5.1.2 Minimize access to secrets (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove get, list and watch access to Secret objects in the cluster. + +### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual) + + +**Result:** warn + +**Remediation:** +Where possible replace any use of wildcards in clusterroles and roles with specific +objects or actions. + +### 5.1.4 Minimize access to create pods (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove create access to pod objects in the cluster. + +### 5.1.5 Ensure that default service accounts are not actively used. (Manual) + + +**Result:** pass + +**Remediation:** +Create explicit service accounts wherever a Kubernetes workload requires specific access +to the Kubernetes API server. +Modify the configuration of each default service account to include this value +automountServiceAccountToken: false + +**Audit Script:** `check_for_default_sa.sh` + +```bash +#!/bin/bash + +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + +count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l) +if [[ ${count_sa} -gt 0 ]]; then + echo "false" + exit +fi + +for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name") +do + for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"') + do + read kind name <<<$(IFS=","; echo $result) + 
resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l) + if [[ ${resource_count} -gt 0 ]]; then + echo "false" + exit + fi + done +done + + +echo "true" + +``` + +**Audit Execution:** + +```bash +./check_for_default_sa.sh +``` + +**Expected Result**: + +```console +'true' is equal to 'true' +``` + +**Returned Value**: + +```console +true +``` + +### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) + + +**Result:** warn + +**Remediation:** +Modify the definition of pods and service accounts which do not need to mount service +account tokens to disable it. + +### 5.1.7 Avoid use of system:masters group (Manual) + + +**Result:** warn + +**Remediation:** +Remove the system:masters group from all users in the cluster. + +### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove the impersonate, bind and escalate rights from subjects. + +### 5.1.9 Minimize access to create persistent volumes (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove create access to PersistentVolume objects in the cluster. + +### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the proxy sub-resource of node objects. + +### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the approval sub-resource of certificatesigningrequest objects. 
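Several of the 5.1.x reviews above reduce to listing which subjects hold a given ClusterRole. A sketch in the style of the audit script above, assuming `jq` is available; `list_subjects_of_role` is an illustrative helper, and it reads the JSON on stdin so the filter can be inspected without a live cluster:

```shell
#!/bin/bash
# Sketch: given `kubectl get clusterrolebindings -o json` on stdin, print each
# binding to the named ClusterRole together with its subjects (see 5.1.1).
list_subjects_of_role() {
  local role="$1"
  jq -r --arg role "$role" '
    .items[]
    | select(.roleRef.kind == "ClusterRole" and .roleRef.name == $role)
    | .metadata.name as $binding
    | (.subjects // [])[]
    | "\($binding): \(.kind)/\(.name)"'
}

# Usage against a live cluster:
#   kubectl get clusterrolebindings -o json | list_subjects_of_role cluster-admin
```

Any subject listed for `cluster-admin` that is not a break-glass account is a candidate for rebinding to a lower-privileged role, per 5.1.1.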
+
+### 5.1.12 Minimize access to webhook configuration objects (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects.
+
+### 5.1.13 Minimize access to the service account token creation (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove access to the token sub-resource of serviceaccount objects.
+
+## 5.2 Pod Security Standards
+### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that either Pod Security Admission or an external policy control system is in place
+for every namespace which contains user workloads.
+
+### 5.2.2 Minimize the admission of privileged containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of privileged containers.
+
+### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostPID` containers.
+
+### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostIPC` containers.
+
+### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostNetwork` containers.
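In current Kubernetes versions, the per-namespace policies called for throughout 5.2 are usually expressed as Pod Security Admission labels on the namespace. A sketch (the namespace name is illustrative); the `restricted` profile rejects privileged, `hostPID`, `hostIPC`, and `hostNetwork` pods, covering 5.2.2–5.2.5:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app            # illustrative namespace name
  labels:
    # Reject pods that violate the "restricted" Pod Security Standard.
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # Optionally surface would-be violations as warnings too:
    pod-security.kubernetes.io/warn: restricted
```

External policy controllers (for example, an admission webhook engine) can substitute for these labels where finer-grained rules are needed.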
+
+### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
+
+### 5.2.7 Minimize the admission of root containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
+or `MustRunAs` with the range of UIDs not including 0, is set.
+
+### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with the `NET_RAW` capability.
+
+### 5.2.9 Minimize the admission of containers with added capabilities (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that `allowedCapabilities` is not present in policies for the cluster unless
+it is set to an empty array.
+
+### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the use of capabilities in applications running on your cluster. Where a namespace
+contains applications which do not require any Linux capabilities to operate, consider adding
+a PSP which forbids the admission of containers which do not drop all capabilities.
+
+### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
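A container `securityContext` that satisfies the intent of controls 5.2.6 through 5.2.10 might look like the following sketch — the pod name and image are hypothetical, and a policy (PSA or otherwise) is still needed to *enforce* these settings:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod       # hypothetical name
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # hypothetical image
      securityContext:
        allowPrivilegeEscalation: false   # 5.2.6
        runAsNonRoot: true                # 5.2.7
        capabilities:
          drop: ["ALL"]                   # 5.2.8 - 5.2.10: no NET_RAW, no added capabilities
```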
+ +### 5.2.12 Minimize the admission of HostPath volumes (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of containers with `hostPath` volumes. + +### 5.2.13 Minimize the admission of containers which use HostPorts (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of containers which use `hostPort` sections. + +## 5.3 Network Policies and CNI +### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual) + + +**Result:** warn + +**Remediation:** +If the CNI plugin in use does not support network policies, consideration should be given to +making use of a different plugin, or finding an alternate mechanism for restricting traffic +in the Kubernetes cluster. + +### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual) + + +**Result:** warn + +**Remediation:** +Follow the documentation and create NetworkPolicy objects as you need them. + +## 5.4 Secrets Management +### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual) + + +**Result:** warn + +**Remediation:** +If possible, rewrite application code to read Secrets from mounted secret files, rather than +from environment variables. + +### 5.4.2 Consider external secret storage (Manual) + + +**Result:** warn + +**Remediation:** +Refer to the Secrets management options offered by your cloud provider or a third-party +secrets management solution. + +## 5.5 Extensible Admission Control +### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and setup image provenance. 
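For control 5.3.2 above, a common starting point is a per-namespace default-deny policy, on top of which required traffic is explicitly allowed. A minimal sketch (the namespace name is hypothetical; it only takes effect with a CNI that enforces NetworkPolicies, per 5.3.1):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-app        # hypothetical namespace
spec:
  podSelector: {}          # selects every pod in the namespace
  policyTypes:             # listing both types with no allow rules denies all traffic
    - Ingress
    - Egress
```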
+ +## 5.7 General Policies +### 5.7.1 Create administrative boundaries between resources using namespaces (Manual) + + +**Result:** warn + +**Remediation:** +Follow the documentation and create namespaces for objects in your deployment as you need +them. + +### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual) + + +**Result:** warn + +**Remediation:** +Use `securityContext` to enable the docker/default seccomp profile in your pod definitions. +An example is as below: + securityContext: + seccompProfile: + type: RuntimeDefault + +### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a +suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker +Containers. + +### 5.7.4 The default namespace should not be used (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Ensure that namespaces are created to allow for appropriate segregation of Kubernetes +resources and that all new resources are created in a specific namespace. 
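The seccomp snippet in 5.7.2 above fits into a pod definition as follows — a sketch with hypothetical pod and image names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo       # hypothetical name
spec:
  securityContext:         # pod-level: applies to all containers in the pod
    seccompProfile:
      type: RuntimeDefault # the container runtime's default profile (docker/default on Docker)
  containers:
    - name: app
      image: registry.example.com/app:latest   # hypothetical image
```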
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md new file mode 100644 index 00000000000..043a6b43cc9 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md @@ -0,0 +1,357 @@ +--- +title: RKE 集群配置参考 +--- + + + +Rancher 安装 Kubernetes 时,它使用 [RKE](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) 或 [RKE2](https://docs.rke2.io/) 作为 Kubernetes 发行版。 + +本文介绍 Rancher 中可用于新的或现有的 RKE Kubernetes 集群的配置选项。 + + +## 概述 + +你可以通过以下两种方式之一来配置 Kubernetes 选项: + +- [Rancher UI](#rancher-ui-中的配置选项):使用 Rancher UI 来选择设置 Kubernetes 集群时常用的自定义选项。 +- [集群配置文件](#rke-集群配置文件参考):高级用户可以创建一个 RKE 配置文件,而不是使用 Rancher UI 来为集群选择 Kubernetes 选项。配置文件可以让你使用 YAML 来指定 RKE 安装中可用的任何选项(除了 system_images 配置)。 + +RKE 集群配置选项嵌套在 `rancher_kubernetes_engine_config` 参数下。有关详细信息,请参阅[集群配置文件](#rke-集群配置文件参考)。 + +在 [RKE 启动的集群](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md)中,你可以编辑任何后续剩余的选项。 + +有关 RKE 配置文件语法的示例,请参阅 [RKE 文档](https://rancher.com/docs/rke/latest/en/example-yamls/)。 + +Rancher UI 中的表单不包括配置 RKE 的所有高级选项。有关 YAML 中 RKE Kubernetes 集群的可配置选项的完整参考,请参阅 [RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/)。 + +## 在 Rancher UI 中使用表单编辑集群 + +要编辑你的集群: + +1. 在左上角,单击 **☰ > 集群管理**。 +1. 转到要配置的集群,然后单击 **⋮ > 编辑配置**。 + + +## 使用 YAML 编辑集群 + +高级用户可以创建一个 RKE 配置文件,而不是使用 Rancher UI 来为集群选择 Kubernetes 选项。配置文件可以让你使用 YAML 来指定 RKE 安装中可用的任何选项(除了 system_images 配置)。 + +RKE 集群(也称为 RKE1 集群)的编辑方式与 RKE2 和 K3s 集群不同。 + +要直接从 Rancher UI 编辑 RKE 配置文件: + +1. 点击 **☰ > 集群管理**。 +1. 
转到要配置的 RKE 集群。单击 **⋮ > 编辑配置**。你将会转到 RKE 配置表单。请注意,由于集群配置在 Rancher 2.6 中发生了变更,**⋮ > 以 YAML 文件编辑**可用于配置 RKE2 集群,但不能用于编辑 RKE1 配置。
+1. 在配置表单中,向下滚动并单击**以 YAML 文件编辑**。
+1. 编辑 `rancher_kubernetes_engine_config` 参数下的 RKE 选项。
+
+## Rancher UI 中的配置选项
+
+:::tip
+
+一些高级配置选项没有在 Rancher UI 表单中开放,但你可以通过在 YAML 中编辑 RKE 集群配置文件来启用这些选项。有关 YAML 中 RKE Kubernetes 集群的可配置选项的完整参考,请参阅 [RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/)。
+
+:::
+
+### Kubernetes 版本
+
+这指的是集群节点上安装的 Kubernetes 版本。Rancher 基于 [hyperkube](https://github.com/rancher/hyperkube) 打包了自己的 Kubernetes 版本。
+
+有关更多详细信息,请参阅[升级 Kubernetes](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md)。
+
+### 网络提供商
+
+这指的是集群使用的[网络提供商](https://kubernetes.io/docs/concepts/cluster-administration/networking/)。有关不同网络提供商的更多详细信息,请查看我们的[网络常见问题解答](../../../faq/container-network-interface-providers.md)。
+
+:::caution
+
+启动集群后,你无法更改网络提供商。由于 Kubernetes 不允许在网络提供商之间切换,因此,请谨慎选择要使用的网络提供商。使用网络提供商创建集群后,如果你需要更改网络提供商,你将需要拆除整个集群以及其中的所有应用。
+
+:::
+
+Rancher 与以下开箱即用的网络提供商兼容:
+
+- [Canal](https://github.com/projectcalico/canal)
+- [Flannel](https://github.com/coreos/flannel#flannel)
+- [Calico](https://docs.projectcalico.org/v3.11/introduction/)
+- [Weave](https://github.com/weaveworks/weave)
+
+:::note Weave 注意事项:
+
+选择 Weave 作为网络提供商时,Rancher 将通过生成随机密码来自动启用加密。如果你想手动指定密码,请参阅使用[配置文件](#rke-集群配置文件参考)和 [Weave 网络插件选项](https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/#weave-network-plug-in-options)来配置集群。
+
+:::
+
+### 项目网络隔离
+
+如果你的网络提供商允许项目网络隔离,你可以选择启用或禁用项目间的通信。
+
+如果你使用支持执行 Kubernetes 网络策略的 RKE 网络插件(例如 Canal 或 Cisco ACI 插件),则可以使用项目网络隔离。
+
+### Kubernetes 云提供商
+
+你可以配置 [Kubernetes 云提供商](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md)。如果你想在 Kubernetes
中使用动态配置的[卷和存储](../../../how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md),你通常需要选择特定的云提供商。例如,如果你想使用 Amazon EBS,则需要选择 `aws` 云提供商。 + +:::note + +如果你要使用的云提供商未作为选项列出,你需要使用[配置文件选项](#rke-集群配置文件参考)来配置云提供商。请参考 [RKE 云提供商文档](https://rancher.com/docs/rke/latest/en/config-options/cloud-providers/)来了解如何配置云提供商。 + +::: + +### 私有镜像仓库 + +集群级别的私有镜像仓库配置仅能用于配置集群。 + +在 Rancher 中设置私有镜像仓库的主要方法有两种:通过[全局默认镜像仓库](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry.md)中的**设置**选项卡设置全局默认镜像仓库,以及在集群级别设置的高级选项中设置私有镜像仓库。全局默认镜像仓库可以用于离线设置,不需要凭证的镜像仓库。而集群级私有镜像仓库用于所有需要凭证的私有镜像仓库。 + +如果你的私有镜像仓库需要凭证,为了将凭证传递给 Rancher,你需要编辑每个需要从仓库中拉取镜像的集群的集群选项。 + +私有镜像仓库的配置选项能让 Rancher 知道要从哪里拉取用于集群的[系统镜像](https://rancher.com/docs/rke/latest/en/config-options/system-images/)或[附加组件镜像](https://rancher.com/docs/rke/latest/en/config-options/add-ons/)。 + +- **系统镜像**是维护 Kubernetes 集群所需的组件。 +- **附加组件**用于部署多个集群组件,包括网络插件、ingress controller、DNS 提供商或 metrics server。 + +有关为集群配置期间应用的组件设置私有镜像仓库的更多信息,请参阅[私有镜像仓库的 RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/private-registries/)。 + +Rancher v2.6 引入了[为 RKE 集群配置 ECR 镜像仓库](https://rancher.com/docs/rke/latest/en/config-options/private-registries/#amazon-elastic-container-registry-ecr-private-registry-setup)的功能。 + +### 授权集群端点 + +授权集群端点(ACE)可用于直接访问 Kubernetes API server,而无需通过 Rancher 进行通信。 + +:::note + +授权集群端点仅适用于 Rancher 启动的 Kubernetes 集群,即只适用于 Rancher [使用 RKE](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#配置-kubernetes-集群的工具) 来配置的集群。它不适用于托管在 Kubernetes 提供商中的集群,例如 Amazon 的 EKS。 + +::: + +在 Rancher 启动的 Kubernetes 集群中,它默认启用,使用具有 `controlplane` 角色的节点的 IP 和默认的 Kubernetes 自签名证书。 + +有关授权集群端点的工作原理以及使用的原因,请参阅[架构介绍](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点)。 + 
+我们建议使用具有授权集群端点的负载均衡器。有关详细信息,请参阅[推荐的架构](../../rancher-manager-architecture/architecture-recommendations.md#授权集群端点架构)。 + +### 节点池 + +有关使用 Rancher UI 在 RKE 集群中设置节点池的信息,请参阅[此页面](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md)。 + +### NGINX Ingress + +如果你想使用高可用性配置来发布应用,并且你使用没有原生负载均衡功能的云提供商来托管主机,请启用此选项以在集群中使用 NGINX Ingress。 + +### Metrics Server 监控 + +这是启用或禁用 [Metrics Server](https://rancher.com/docs/rke/latest/en/config-options/add-ons/metrics-server/) 的选项。 + +每个能够使用 RKE 启动集群的云提供商都可以收集指标并监控你的集群节点。如果启用此选项,你可以从你的云提供商门户查看你的节点指标。 + +### 节点上的 Docker 版本 + +表示是否允许节点运行 Rancher 不正式支持的 Docker 版本。 + +如果你选择使用支持的 Docker 版本,Rancher 会禁止 pod 运行在安装了不支持的 Docker 版本的节点上。 + +如需了解各个 Rancher 版本通过了哪些 Docker 版本测试,请参见[支持和维护条款](https://rancher.com/support-maintenance-terms/)。 + +### Docker 根目录 + +如果要添加到集群的节点为 Docker 配置了非默认 Docker 根目录(默认为 `/var/lib/docker`),请在此选项中指定正确的 Docker 根目录。 + +### 默认 Pod 安全策略 + +如果你启用了 **Pod 安全策略支持**,请使用此下拉菜单选择应用于集群的 pod 安全策略。 + +### 节点端口范围 + +更改可用于 [NodePort 服务](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport)的端口范围的选项。默认为 `30000-32767`。 + +### 定期 etcd 快照 + +启用或禁用[定期 etcd 快照](https://rancher.com/docs/rke/latest/en/etcd-snapshots/#etcd-recurring-snapshots)的选项。 + +### Agent 环境变量 + +为 [rancher agent](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md) 设置环境变量的选项。你可以使用键值对设置环境变量。如果 Rancher Agent 需要使用代理与 Rancher Server 通信,则可以使用 Agent 环境变量设置 `HTTP_PROXY`,`HTTPS_PROXY` 和 `NO_PROXY` 环境变量。 + +### 更新 ingress-nginx + +使用 Kubernetes 1.16 之前版本创建的集群将具有 `OnDelete`的 `ingress-nginx` `updateStrategy`。使用 Kubernetes 1.16 或更高版本创建的集群将具有 `RollingUpdate`。 + +如果 `ingress-nginx` 的 `updateStrategy` 是 `OnDelete`,则需要删除这些 pod 以获得 deployment 正确的版本。 + +### Cluster Agent 配置和 Fleet Agent 配置 + +你可以为 Cluster Agent 和集群的 Fleet Agent 配置调度字段和资源限制。你可以使用这些字段来自定义容忍度、亲和性规则和资源要求。其他容忍度会被尾附到默认容忍度和 Control Plane 
节点污点的列表中。如果你定义了自定义亲和性规则,它们将覆盖全局默认亲和性设置。定义资源要求会在以前没有的地方设置请求或限制。 + +:::note + +有了这个选项,你可以覆盖或删除运行集群所需的规则。我们强烈建议你不要删除或覆盖这些规则和其他亲和性规则,因为这可能会导致不必要的影响: + +- `affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution` 用于 `cattle-cluster-agent` +- `cluster-agent-default-affinity` 用于 `cattle-cluster-agent` +- `fleet-agent-default-affinity` 用于 `fleet-agent` + +::: + +如果将 Rancher 降级到 v2.7.4 或更低版本,你的更改将丢失,而且 Agent 将在没有你的自定义设置的情况下重新部署。重新部署时,Fleet Agent 将回退到使用内置默认值。如果降级期间 Fleet 版本没有更改,则不会立即重新部署。 + + +## RKE 集群配置文件参考 + +高级用户可以创建一个 RKE 配置文件,而不是使用 Rancher UI 来为集群选择 Kubernetes 选项。配置文件可以让你在 RKE 安装中设置任何[可用选项](https://rancher.com/docs/rke/latest/en/config-options/)(`system_images` 配置除外)。使用 Rancher UI 或 API 创建集群时,不支持 `system_images` 选项。 + +有关 YAML 中 RKE Kubernetes 集群的可配置选项的完整参考,请参阅 [RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/)。 + +### Rancher 中的配置文件结构 + +RKE(Rancher Kubernetes Engine)是 Rancher 用来配置 Kubernetes 集群的工具。过去,Rancher 的集群配置文件与 [RKE 配置文件](https://rancher.com/docs/rke/latest/en/example-yamls/)的结构是一致的。但由于 Rancher 文件结构发生了变化,因此在 Rancher 中,RKE 集群配置项与非 RKE 配置项是分开的。所以,你的集群配置需要嵌套在集群配置文件中的 `rancher_kubernetes_engine_config` 参数下。使用早期版本的 Rancher 创建的集群配置文件需要针对这种格式进行更新。以下是一个集群配置文件示例: + +
+ 集群配置文件示例 + +```yaml +# +# Cluster Config +# +docker_root_dir: /var/lib/docker +enable_cluster_alerting: false +enable_cluster_monitoring: false +enable_network_policy: false +local_cluster_auth_endpoint: + enabled: true +# +# Rancher Config +# +rancher_kubernetes_engine_config: # Your RKE template config goes here. + addon_job_timeout: 30 + authentication: + strategy: x509 + ignore_docker_version: true +# +# # 目前仅支持 Nginx ingress provider +# # 要禁用 Ingress controller,设置 `provider: none` +# # 要在指定节点上禁用 Ingress,使用 node_selector,例如: +# provider: nginx +# node_selector: +# app: ingress +# + ingress: + provider: nginx + kubernetes_version: v1.15.3-rancher3-1 + monitoring: + provider: metrics-server +# +# If you are using calico on AWS +# +# network: +# plugin: calico +# calico_network_provider: +# cloud_provider: aws +# +# # To specify flannel interface +# +# network: +# plugin: flannel +# flannel_network_provider: +# iface: eth1 +# +# # To specify flannel interface for canal plugin +# +# network: +# plugin: canal +# canal_network_provider: +# iface: eth1 +# + network: + options: + flannel_backend_type: vxlan + plugin: canal +# +# services: +# kube-api: +# service_cluster_ip_range: 10.43.0.0/16 +# kube-controller: +# cluster_cidr: 10.42.0.0/16 +# service_cluster_ip_range: 10.43.0.0/16 +# kubelet: +# cluster_domain: cluster.local +# cluster_dns_server: 10.43.0.10 +# + services: + etcd: + backup_config: + enabled: true + interval_hours: 12 + retention: 6 + safe_timestamp: false + creation: 12h + extra_args: + election-timeout: 5000 + heartbeat-interval: 500 + gid: 0 + retention: 72h + snapshot: false + uid: 0 + kube_api: + always_pull_images: false + pod_security_policy: false + service_node_port_range: 30000-32767 + ssh_agent_auth: false +windows_prefered_cluster: false +``` +
+ +### 默认 DNS 提供商 + +下表显示了默认部署的 DNS 提供商。有关如何配置不同 DNS 提供商的更多信息,请参阅 [DNS 提供商相关的 RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/add-ons/dns/)。CoreDNS 只能在 Kubernetes v1.12.0 及更高版本上使用。 + +| Rancher 版本 | Kubernetes 版本 | 默认 DNS 提供商 | +|-------------|--------------------|----------------------| +| v2.2.5 及更高版本 | v1.14.0 及更高版本 | CoreDNS | +| v2.2.5 及更高版本 | v1.13.x 及更低版本 | kube-dns | +| v2.2.4 及更低版本 | 任意 | kube-dns | + +## YAML 中的 Rancher 特定参数 + +除了 RKE 配置文件选项外,还有可以在配置文件 (YAML) 中配置的 Rancher 特定设置如下。 + +### docker_root_dir + +请参阅 [Docker 根目录](#docker-根目录)。 + +### enable_cluster_monitoring + +启用或禁用[集群监控](../../../integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md)的选项。 + +### enable_network_policy + +启用或禁用项目网络隔离的选项。 + +如果你使用支持执行 Kubernetes 网络策略的 RKE 网络插件(例如 Canal 或 Cisco ACI 插件),则可以使用项目网络隔离。 + +### local_cluster_auth_endpoint + +请参阅[授权集群端点](#授权集群端点)。 + +示例: + +```yaml +local_cluster_auth_endpoint: + enabled: true + fqdn: "FQDN" + ca_certs: |- + -----BEGIN CERTIFICATE----- + ... 
+ -----END CERTIFICATE----- +``` + +### 自定义网络插件 + +你可以使用 RKE 的[用户定义的附加组件功能](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/)来添加自定义网络插件。部署 Kubernetes 集群之后,你可以定义要部署的任何附加组件。 + +有两种方法可以指定附加组件: + +- [内嵌附加组件](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/#in-line-add-ons) +- [为附加组件引用 YAML 文件](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/#referencing-yaml-files-for-add-ons) + +有关如何通过编辑 `cluster.yml` 来配置自定义网络插件的示例,请参阅 [RKE 文档](https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/custom-network-plugin-example)。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md new file mode 100644 index 00000000000..62a27bd3e86 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md @@ -0,0 +1,516 @@ +--- +title: RKE 加固指南 +--- + + + + + + + +本文档提供了针对生产环境的 RKE 集群进行加固的具体指导,以便在使用 Rancher 部署之前进行配置。它概述了满足信息安全中心(Center for Information Security, CIS)Kubernetes benchmark controls 所需的配置和控制。 + +:::note +这份加固指南描述了如何确保你集群中的节点安全。我们建议你在安装 Kubernetes 之前遵循本指南。 +::: + +此加固指南适用于 RKE 集群,并与以下版本的 CIS Kubernetes Benchmark、Kubernetes 和 Rancher 相关联: + +| Rancher 版本 | CIS Benchmark 版本 | Kubernetes 版本 | +|-----------------|-----------------------|------------------------------| +| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.23 | +| Rancher v2.7 | Benchmark v1.24 | Kubernetes v1.24 | +| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25 至 v1.26 | + +:::note +- 在 Benchmark v1.24 及更高版本中,检查 id `4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive 
(Automated)` 可能会失败,因为 `/etc/kubernetes/ssl/kube-ca.pem` 默认设置为 644。
+- 在 Benchmark v1.7 中,不再需要 `--protect-kernel-defaults` (`4.2.6`) 参数,并已被 CIS 删除。
+:::
+
+有关如何评估加固的 RKE 集群与官方 CIS benchmark 的更多细节,请参考特定 Kubernetes 和 CIS benchmark 版本的 RKE 自我评估指南。
+
+## 主机级别要求
+
+### 配置 Kernel 运行时参数
+
+建议对集群中的所有节点类型使用以下 `sysctl` 配置。在 `/etc/sysctl.d/90-kubelet.conf` 中设置以下参数:
+
+```ini
+vm.overcommit_memory=1
+vm.panic_on_oom=0
+kernel.panic=10
+kernel.panic_on_oops=1
+```
+
+运行 `sysctl -p /etc/sysctl.d/90-kubelet.conf` 以启用设置。
+
+### 配置 `etcd` 用户和组
+
+在安装 RKE 之前,需要设置 **etcd** 服务的用户帐户和组。
+
+#### 创建 `etcd` 用户和组
+
+要创建 **etcd** 用户和组,请运行以下控制台命令。
+下面的命令示例中使用 `52034` 作为 **uid** 和 **gid**。
+任何有效且未使用的 **uid** 或 **gid** 都可以代替 `52034`。
+
+```bash
+groupadd --gid 52034 etcd
+useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd --shell /usr/sbin/nologin
+```
+
+在通过集群配置文件 `config.yml` 部署 RKE 时,请更新 `etcd` 用户的 `uid` 和 `gid`:
+
+```yaml
+services:
+  etcd:
+    gid: 52034
+    uid: 52034
+```
+
+## Kubernetes 运行时要求
+
+### 配置 `default` Service Account
+
+#### 设置 `automountServiceAccountToken` 为 `false` 用于 `default` service accounts
+
+Kubernetes 提供了一个 default service account,供集群工作负载使用,其中没有为 pod 分配特定的 service account。
+如果需要从 pod 访问 Kubernetes API,则应为该 pod 创建特定的 service account,并向该 service account 授予权限。
+应配置 default service account,使其不提供 service account 令牌,并且不应具有任何明确的权限分配。
+
+对于标准 RKE 安装上的每个命名空间(包括 `default` 和 `kube-system`),`default` service account 必须包含以下值:
+
+```yaml
+automountServiceAccountToken: false
+```
+
+将以下配置保存到名为 `account_update.yaml` 的文件中。
+
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: default
+automountServiceAccountToken: false
+```
+
+创建一个名为 `account_update.sh` 的 bash 脚本文件。
+确保执行 `chmod +x account_update.sh` 命令,以赋予脚本执行权限。
+
+```bash
+#!/bin/bash -e
+
+for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do
+  kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
+done
+```
+
+执行此脚本将
`account_update.yaml` 配置应用到所有命名空间中的 `default` service account。 + +### 配置网络策略 + +#### 确保所有命名空间都定义了网络策略 + +在同一个 Kubernetes 集群上运行不同的应用程序会带来风险,即某个受感染的应用程序可能会攻击相邻的应用程序。为确保容器只与其预期通信的容器进行通信,网络分段至关重要。网络策略规定了哪些 Pod 可以互相通信,以及与其他网络终端通信的方式。 + +网络策略是命名空间范围的。当在特定命名空间引入网络策略时,所有未被策略允许的流量将被拒绝。然而,如果在命名空间中没有网络策略,那么所有流量将被允许进入和离开该命名空间中的 Pod。要强制执行网络策略,必须启用容器网络接口(container network interface, CNI)插件。本指南使用 [Canal](https://github.com/projectcalico/canal) 来提供策略执行。有关 CNI 提供程序的其他信息可以在[这里](https://www.suse.com/c/rancher_blog/comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/)找到。 + +一旦在集群上启用了 CNI 提供程序,就可以应用默认的网络策略。下面提供了一个 **permissive** 的示例供参考。如果你希望允许匹配某个命名空间中所有 Pod 的所有入站和出站流量(即使添加了策略导致某些 Pod 被视为”隔离”),你可以创建一个明确允许该命名空间中所有流量的策略。请将以下配置保存为 `default-allow-all.yaml`。有关网络策略的其他[文档](https://kubernetes.io/docs/concepts/services-networking/network-policies/)可以在 Kubernetes 站点上找到。 + +:::caution +此网络策略只是一个示例,不建议用于生产用途。 +::: + +```yaml +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: default-allow-all +spec: + podSelector: {} + ingress: + - {} + egress: + - {} + policyTypes: + - Ingress + - Egress +``` + +创建一个名为 `apply_networkPolicy_to_all_ns.sh`的 Bash 脚本文件。 + +确保运行 `chmod +x apply_networkPolicy_to_all_ns.sh` 命令,以赋予脚本执行权限。 + +```bash +#!/bin/bash -e + +for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do + kubectl apply -f default-allow-all.yaml -n ${namespace} +done +``` + +执行此脚本以将 `default-allow-all.yaml` 配置和 **permissive** 的 `NetworkPolicy` 应用于所有命名空间。 + +## 已知限制 + +- 当注册自定义节点仅提供公共 IP 时,Rancher **exec shell** 和 **查看 pod 日志** 在加固设置中**不起作用**。 此功能需要在注册自定义节点时提供私有 IP。 + +## 加固的 RKE `cluster.yml` 配置参考 + +参考的 `cluster.yml` 文件是由 RKE CLI 使用的,它提供了实现 RKE 加固安装所需的配置。 +RKE [文档](https://rancher.com/docs/rke/latest/en/installation/)提供了有关配置项的更多详细信息。这里参考的 `cluster.yml` 不包括必需的 `nodes` 指令,因为它取决于你的环境。在 RKE 中有关节点配置的文档可以在[这里](https://rancher.com/docs/rke/latest/en/config-options/nodes/)找到。 + +示例 `cluster.yml` 配置文件中包含了一个 Admission Configuration 
策略,在 `services.kube-api.admission_configuration` 字段中指定。这个[示例](../../psa-restricted-exemptions.md)策略包含了命名空间的豁免规则,这对于在Rancher中正确运行导入的RKE集群非常必要,类似于Rancher预定义的 [`rancher-restricted`](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) 策略。 + +如果你希望使用 RKE 的默认 `restricted` 策略,则将 `services.kube-api.admission_configuration` 字段留空,并将 `services.pod_security_configuration` 设置为 `restricted`。你可以在 [RKE 文档](https://rke.docs.rancher.com/config-options/services/pod-security-admission)中找到更多信息。 + + + + +:::note +如果你打算将一个 RKE 集群导入到 Rancher 中,请参考此[文档](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md)以了解如何配置 PSA 以豁免 Rancher 系统命名空间。 +::: + +```yaml +# 如果你打算在离线环境部署 Kubernetes, +# 请查阅文档以了解如何配置自定义的 RKE 镜像。 +nodes: [] +kubernetes_version: # 定义 RKE 版本 +services: + etcd: + uid: 52034 + gid: 52034 + kube-api: + secrets_encryption_config: + enabled: true + audit_log: + enabled: true + event_rate_limit: + enabled: true + # 如果你在 `admission_configuration` 中设置了自定义策略, + # 请将 `pod_security_configuration` 字段留空。 + # 否则,将其设置为 `restricted` 以使用 RKE 预定义的受限策略, + # 并删除 `admission_configuration` 字段中的所有内容。 + # + # pod_security_configuration: restricted + # + admission_configuration: + apiVersion: apiserver.config.k8s.io/v1 + kind: AdmissionConfiguration + plugins: + - name: PodSecurity + configuration: + apiVersion: pod-security.admission.config.k8s.io/v1 + kind: PodSecurityConfiguration + defaults: + enforce: "restricted" + enforce-version: "latest" + audit: "restricted" + audit-version: "latest" + warn: "restricted" + warn-version: "latest" + exemptions: + usernames: [] + runtimeClasses: [] + namespaces: [calico-apiserver, + calico-system, + cattle-alerting, + cattle-csp-adapter-system, + cattle-elemental-system, + cattle-epinio-system, + cattle-externalip-system, + cattle-fleet-local-system, + cattle-fleet-system, + cattle-gatekeeper-system, + cattle-global-data, + 
cattle-global-nt, + cattle-impersonation-system, + cattle-istio, + cattle-istio-system, + cattle-logging, + cattle-logging-system, + cattle-monitoring-system, + cattle-neuvector-system, + cattle-prometheus, + cattle-provisioning-capi-system, + cattle-resources-system, + cattle-sriov-system, + cattle-system, + cattle-ui-plugin-system, + cattle-windows-gmsa-system, + cert-manager, + cis-operator-system, + fleet-default, + ingress-nginx, + istio-system, + kube-node-lease, + kube-public, + kube-system, + longhorn-system, + rancher-alerting-drivers, + security-scan, + tigera-operator] + kube-controller: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + kubelet: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + generate_serving_certificate: true +addons: | + apiVersion: networking.k8s.io/v1 + kind: NetworkPolicy + metadata: + name: default-allow-all + spec: + podSelector: {} + ingress: + - {} + egress: + - {} + policyTypes: + - Ingress + - Egress + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: default + automountServiceAccountToken: false +``` + + + + +```yaml +# 如果你打算在离线环境部署 Kubernetes, +# 请查阅文档以了解如何配置自定义的 RKE 镜像。 +nodes: [] +kubernetes_version: # 定义 RKE 版本 +services: + etcd: + uid: 52034 + gid: 52034 + kube-api: + secrets_encryption_config: + enabled: true + audit_log: + enabled: true + event_rate_limit: + enabled: true + pod_security_policy: true + kube-controller: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + kubelet: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + protect-kernel-defaults: true + generate_serving_certificate: true +addons: | + # Upstream Kubernetes restricted PSP policy + # https://github.com/kubernetes/website/blob/564baf15c102412522e9c8fc6ef2b5ff5b6e766c/content/en/examples/policy/restricted-psp.yaml + apiVersion: policy/v1beta1 + kind: PodSecurityPolicy + metadata: + name: restricted-noroot + spec: + privileged: false + # Required to prevent 
escalations to root. + allowPrivilegeEscalation: false + requiredDropCapabilities: + - ALL + # Allow core volume types. + volumes: + - 'configMap' + - 'emptyDir' + - 'projected' + - 'secret' + - 'downwardAPI' + # Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use. + - 'csi' + - 'persistentVolumeClaim' + - 'ephemeral' + hostNetwork: false + hostIPC: false + hostPID: false + runAsUser: + # Require the container to run without root privileges. + rule: 'MustRunAsNonRoot' + seLinux: + # This policy assumes the nodes are using AppArmor rather than SELinux. + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. + - min: 1 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. + - min: 1 + max: 65535 + readOnlyRootFilesystem: false + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: psp:restricted-noroot + rules: + - apiGroups: + - extensions + resourceNames: + - restricted-noroot + resources: + - podsecuritypolicies + verbs: + - use + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: psp:restricted-noroot + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: psp:restricted-noroot + subjects: + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:serviceaccounts + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:authenticated + --- + apiVersion: networking.k8s.io/v1 + kind: NetworkPolicy + metadata: + name: default-allow-all + spec: + podSelector: {} + ingress: + - {} + egress: + - {} + policyTypes: + - Ingress + - Egress + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: default + automountServiceAccountToken: false +``` + + + + +## 加固后的 RKE 集群模板配置参考 + +参考的 RKE 集群模板提供了实现 Kubernetes 加固安装所需的最低配置。RKE 模板用于提供 Kubernetes 并定义 Rancher 设置。有关安装 RKE 及其模板详情的其他信息,请参考 Rancher 
[文档](../../../../getting-started/installation-and-upgrade/installation-and-upgrade.md) 。 + + + + +```yaml +# +# 集群配置 +# +default_pod_security_admission_configuration_template_name: rancher-restricted +enable_network_policy: true +local_cluster_auth_endpoint: + enabled: true +name: # 定义集群名称 + +# +# Rancher 配置 +# +rancher_kubernetes_engine_config: + addon_job_timeout: 45 + authentication: + strategy: x509|webhook + kubernetes_version: # 定义 RKE 版本 + services: + etcd: + uid: 52034 + gid: 52034 + kube-api: + audit_log: + enabled: true + event_rate_limit: + enabled: true + pod_security_policy: false + secrets_encryption_config: + enabled: true + kube-controller: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 + kubelet: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 + generate_serving_certificate: true + scheduler: + extra_args: + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +``` + + + + +```yaml +# +# 集群配置 +# +default_pod_security_policy_template_id: restricted-noroot +enable_network_policy: true +local_cluster_auth_endpoint: + enabled: true 
+name: # 定义集群名称 + +# +# Rancher 配置 +# +rancher_kubernetes_engine_config: + addon_job_timeout: 45 + authentication: + strategy: x509|webhook + kubernetes_version: # 定义 RKE 版本 + services: + etcd: + uid: 52034 + gid: 52034 + kube-api: + audit_log: + enabled: true + event_rate_limit: + enabled: true + pod_security_policy: true + secrets_encryption_config: + enabled: true + kube-controller: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 + kubelet: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + protect-kernel-defaults: true + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 + generate_serving_certificate: true + scheduler: + extra_args: + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +``` + + + + +## 结论 + +如果你按照本指南操作,由 Rancher 提供的 RKE 自定义集群将配置为通过 CIS Kubernetes Benchmark 测试。你可以查看我们的 RKE 自我评估指南,了解我们是如何验证每个 benchmarks 的,并且你可以在你的集群上执行相同的操作。 \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md 
b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md new file mode 100644 index 00000000000..1a3eb88ed98 --- /dev/null +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md @@ -0,0 +1,2863 @@ +--- +title: RKE 自我评估指南 - CIS Benchmark v1.7 - K8s v1.25/v1.26/v1.27 +--- + + + + + + + +本文档是 [RKE 加固指南](rke1-hardening-guide.md)的配套文档,该指南提供了关于如何加固正在生产环境中运行并由 Rancher 管理的 RKE 集群的指导方针。本 benchmark 指南可帮助你根据 CIS Kubernetes Benchmark 中的每个 control 来评估加固集群的安全性。 + +本指南对应以下版本的 Rancher、CIS Benchmarks 和 Kubernetes: + +| Rancher 版本 | CIS Benchmark 版本 | Kubernetes 版本 | +|-----------------|-----------------------|--------------------| +| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25/v1.26/v1.27 | + +本指南将介绍各种 controls,并提供更新的示例命令来审计 Rancher 创建的集群中的合规性。由于 Rancher 和 RKE 将 Kubernetes 服务安装为 Docker 容器,因此 CIS Kubernetes Benchmark 中的许多 control 验证检查不适用。这些检查将返回 `Not Applicable` 的结果。 + +本文档适用于 Rancher 运维人员、安全团队、审计员和决策者。 + +有关每个 control 的更多信息,包括详细描述和未通过测试的补救措施,请参考 CIS Kubernetes Benchmark v1.7 的相应部分。你可以在[互联网安全中心 (CIS)](https://www.cisecurity.org/benchmark/kubernetes/)创建免费账户后下载 benchmark。 + +## 测试方法 + +Rancher 和 RKE 通过 Docker 容器安装 Kubernetes 服务。配置是通过初始化时传递给容器的参数定义的,而不是通过配置文件。 + +在 control 审计与原始 CIS benchmark 不同时,提供了针对 Rancher 的特定审计命令以进行测试。在执行测试时,你将需要访问所有 RKE 节点主机上的命令行。这些命令还使用了 [kubectl](https://kubernetes.io/docs/tasks/tools/)(带有有效的配置文件)和 [jq](https://stedolan.github.io/jq/) 工具,在测试和评估测试结果时这些工具是必需的。 + +:::note + +本指南仅涵盖 `automated`(之前称为 `scored`)测试。 + +::: + +### Controls + +## 1.1 Control Plane Node Configuration Files +### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the 
file location on your system) on the
+control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
+All configuration is passed in as arguments at container run time.
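
Although the checks above are Not Applicable on RKE, several later automated controls (for example 1.1.11, 1.1.20 and 1.1.21) audit file modes with `stat`. The "600 or more restrictive" comparison those checks rely on can be sketched as a small bash helper; the temporary file below is purely illustrative and not a real cluster path:

```shell
#!/usr/bin/env bash
# Hedged sketch: decide whether a file's mode is "600 or more restrictive",
# i.e. it sets no permission bits outside rw------- (0600).
check_600() {
  local mode
  mode=$(stat -c %a "$1") || return 2
  # Mask off any bits contained in 0600; anything left over is a violation.
  if (( 8#$mode & ~8#600 & 8#777 )); then
    echo "fail (permissions=$mode)"
  else
    echo "pass (permissions=$mode)"
  fi
}

# Illustrative usage against a throwaway file (not a real cluster path):
f=$(mktemp)
chmod 644 "$f"
check_600 "$f"   # fail (permissions=644)
chmod 600 "$f"
check_600 "$f"   # pass (permissions=600)
rm -f "$f"
```

A mode such as 400 also passes, since it sets no bits beyond 0600 — which is exactly what "or more restrictive" means in these controls.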
+
+### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 /etc/kubernetes/manifests/etcd.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root /etc/kubernetes/manifests/etcd.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600
+
+**Audit:**
+
+```bash
+ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
+```
+
+**Expected Result**:
+
+```console
+'permissions' is present
+```
+
+### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root
+
+**Audit:**
+
+```bash
+ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above).
For example, +chmod 700 /var/lib/etcd + +**Audit:** + +```bash +stat -c %a /node/var/lib/etcd +``` + +**Expected Result**: + +```console +'700' is equal to '700' +``` + +**Returned Value**: + +```console +700 +``` + +### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) + + +**Result:** pass + +**Remediation:** +On the etcd server node, get the etcd data directory, passed as an argument --data-dir, +from the command 'ps -ef | grep etcd'. +Run the below command (based on the etcd data directory found above). +For example, chown etcd:etcd /var/lib/etcd + +**Audit:** + +```bash +stat -c %U:%G /node/var/lib/etcd +``` + +**Expected Result**: + +```console +'etcd:etcd' is present +``` + +**Returned Value**: + +```console +etcd:etcd +``` + +### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chmod 600 /etc/kubernetes/admin.conf +Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. + +### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chown root:root /etc/kubernetes/admin.conf +Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. + +### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. 
+For example, +chmod 600 scheduler +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. +All configuration is passed in as arguments at container run time. + +### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root scheduler +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. +All configuration is passed in as arguments at container run time. + +### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chmod 600 controllermanager +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. +All configuration is passed in as arguments at container run time. + +### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root controllermanager +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. +All configuration is passed in as arguments at container run time. + +### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. 
+For example, +chown -R root:root /etc/kubernetes/pki/ + +**Audit Script:** `check_files_owner_in_dir.sh` + +```bash +#!/usr/bin/env bash + +# This script is used to ensure the owner is set to root:root for +# the given directory and all the files in it +# +# inputs: +# $1 = /full/path/to/directory +# +# outputs: +# true/false + +INPUT_DIR=$1 + +if [[ "${INPUT_DIR}" == "" ]]; then + echo "false" + exit +fi + +if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then + echo "false" + exit +fi + +statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*) +while read -r statInfoLine; do + f=$(echo ${statInfoLine} | cut -d' ' -f1) + p=$(echo ${statInfoLine} | cut -d' ' -f2) + + if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then + if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then + echo "false" + exit + fi + else + if [[ "$p" != "root:root" ]]; then + echo "false" + exit + fi + fi +done <<< "${statInfoLines}" + + +echo "true" +exit + +``` + +**Audit Execution:** + +```bash +./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl +``` + +**Expected Result**: + +```console +'true' is equal to 'true' +``` + +**Returned Value**: + +```console +true +``` + +### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual) + + +**Result:** warn + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' -exec chmod -R 600 {} + + +**Audit:** + +```bash +find /node/etc/kubernetes/ssl/ -name '*.pem' ! 
-name '*key.pem' | xargs stat -c permissions=%a +``` + +**Expected Result**: + +```console +permissions has permissions 644, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=644 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 +``` + +### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +find /node/etc/kubernetes/ssl/ -name '*key.pem' -exec chmod -R 600 {} + + +**Audit:** + +```bash +find /node/etc/kubernetes/ssl/ -name '*key.pem' | xargs stat -c permissions=%a +``` + +**Expected Result**: + +```console +permissions has permissions 600, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 +``` + +## 1.2 API Server +### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the below parameter. +--anonymous-auth=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--anonymous-auth' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the documentation and configure alternate mechanisms for authentication. Then, +edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and remove the --token-auth-file= parameter. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--token-auth-file' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and remove the `DenyServiceExternalIPs` +from enabled admission plugins. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the +apiserver and kubelets. Then, edit API server pod specification file +/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the +kubelet client certificate and key parameters as below. +--kubelet-client-certificate= +--kubelet-client-key= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Follow the Kubernetes documentation and setup the TLS connection between +the apiserver and kubelets. Then, edit the API server pod specification file +/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the +--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority. +--kubelet-certificate-authority= +When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. + +### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow. +One such example could be as below. 
+--authorization-mode=RBAC + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' does not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to a value that includes Node. +--authorization-mode=Node,RBAC + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' has 'Node' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to a value that includes RBAC, +for example `--authorization-mode=Node,RBAC`. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' has 'RBAC' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set the desired limits in a configuration file. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +and set the below parameters. +--enable-admission-plugins=...,EventRateLimit,... +--admission-control-config-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'EventRateLimit' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a +value that does not include AlwaysAdmit. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual) + + +**Result:** warn + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --enable-admission-plugins parameter to include +AlwaysPullImages. +--enable-admission-plugins=...,AlwaysPullImages,... + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'AlwaysPullImages' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual) + + +**Result:** warn + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --enable-admission-plugins parameter to include +SecurityContextDeny, unless PodSecurityPolicy is already in place. +--enable-admission-plugins=...,SecurityContextDeny,... + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the documentation and create ServiceAccount objects as per your environment. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and ensure that the --disable-admission-plugins parameter is set to a +value that does not include ServiceAccount. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --disable-admission-plugins parameter to +ensure it does not include NamespaceLifecycle. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --enable-admission-plugins parameter to a +value that includes NodeRestriction. +--enable-admission-plugins=...,NodeRestriction,... + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'NodeRestriction' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.16 Ensure that the --secure-port argument is not set to 0 - Note: This recommendation is obsolete and will be deleted per the consensus process (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and either remove the --secure-port parameter or +set it to a different (non-zero) desired port. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--secure-port' is greater than 0 OR '--secure-port' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.17 Ensure that the --profiling argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the below parameter. +--profiling=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--profiling' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.18 Ensure that the --audit-log-path argument is set (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example,
--audit-log-path=/var/log/apiserver/audit.log

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-path' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-maxage parameter to 30
or as an appropriate number of days, for example,
--audit-log-maxage=30

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-maxage' is greater or equal to 30
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
value. For example,
--audit-log-maxbackup=10

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-maxbackup' is greater or equal to 10
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
For example, to set it as 100 MB, --audit-log-maxsize=100

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-maxsize' is greater or equal to 100
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual)


**Result:** warn

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
and set the below parameter as appropriate and if needed.
For example, --request-timeout=300s

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--service-account-lookup=true
Alternatively, you can delete the --service-account-lookup parameter from this file so
that the default takes effect.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --service-account-key-file parameter
to the public key file for service accounts. For example,
--service-account-key-file=&lt;filename&gt;

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--service-account-key-file' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)


**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the etcd certificate and key file parameters.
--etcd-certfile=&lt;path/to/client-certificate-file&gt;
--etcd-keyfile=&lt;path/to/client-key-file&gt;

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--etcd-certfile' is present AND '--etcd-keyfile' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)


**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the TLS certificate and private key file parameters.
--tls-cert-file=&lt;path/to/tls-certificate-file&gt;
--tls-private-key-file=&lt;path/to/tls-key-file&gt;

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--tls-cert-file' is present AND '--tls-private-key-file' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated)


**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the client certificate authority file.
--client-ca-file=&lt;path/to/client-ca-file&gt;

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--client-ca-file' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated)


**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the etcd certificate authority file parameter.
--etcd-cafile=&lt;path/to/ca-file&gt;

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--etcd-cafile' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --encryption-provider-config parameter to the path of that file.
+For example, --encryption-provider-config=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--encryption-provider-config' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.30 Ensure that encryption providers are appropriately configured (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+In this file, choose aescbc, kms or secretbox as the encryption provider.
+
+**Audit:**
+
+```bash
+ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%')
+if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
+```
+
+**Expected Result**:
+
+```console
+'provider' is present
+```
+
+### 1.2.31 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter. 
+--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256, +TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, +TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, +TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, +TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, +TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, +TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA, +TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384 + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +## 1.3 Controller Manager +### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold, +for example, --terminated-pod-gc-threshold=10 + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--terminated-pod-gc-threshold' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.2 Ensure that the --profiling argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the below parameter. +--profiling=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--profiling' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node to set the below parameter. +--use-service-account-credentials=true + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--use-service-account-credentials' is not equal to 'false' +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --service-account-private-key-file parameter +to the private key file for service accounts. +--service-account-private-key-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--service-account-private-key-file' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true
+```
+
+### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
+--root-ca-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--root-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true
+```
+
+### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
+--feature-gates=RotateKubeletServerCertificate=true
+Clusters provisioned by RKE handle certificate rotation directly through RKE. 
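The gate above can be spot-checked directly from the controller manager command line captured in the audit output. A minimal sketch — the `CMDLINE` value here is an abbreviated stand-in for the live `ps` output, not the full returned value shown above:

```shell
# Abbreviated stand-in for: ps -ef | grep kube-controller-manager | grep -v grep
CMDLINE='kube-controller-manager --profiling=false --feature-gates=RotateKubeletServerCertificate=true --leader-elect=true'

# Print the feature-gate setting; empty output means the gate is not set.
echo "$CMDLINE" | grep -o 'RotateKubeletServerCertificate=[a-z]*'
```

For the sample above this prints `RotateKubeletServerCertificate=true`; on an RKE node the result is informational, since RKE manages certificate rotation itself.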
+
+### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and ensure the correct value for the --bind-address parameter.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--bind-address' is present OR '--bind-address' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true
+```
+
+## 1.4 Scheduler
+### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
+on the control 
plane node and set the below parameter. +--profiling=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-scheduler | grep -v grep +``` + +**Expected Result**: + +```console +'--profiling' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4339 4318 0 Sep11 ? 00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true +``` + +### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml +on the control plane node and ensure the correct value for the --bind-address parameter + +**Audit:** + +```bash +/bin/ps -ef | grep kube-scheduler | grep -v grep +``` + +**Expected Result**: + +```console +'--bind-address' is present OR '--bind-address' is not present +``` + +**Returned Value**: + +```console +root 4339 4318 0 Sep11 ? 
00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true +``` + +## 2 Etcd Node Configuration +### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the etcd service documentation and configure TLS encryption. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml +on the master node and set the below parameters. +--cert-file= +--key-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--cert-file' is present AND '--key-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and set the below parameter. +--client-cert-auth="true" + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.3 Ensure that the --auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and either remove the --auto-tls parameter or set it to false. 
+ --auto-tls=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present
+```
+
+**Returned Value**:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/
+```
+
+### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the etcd service documentation and configure peer TLS encryption as appropriate
+for your etcd cluster.
+Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the
+master node and set the below parameters.
+--peer-cert-file=&lt;path to peer cert&gt;
+--peer-key-file=&lt;path to peer key&gt;
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--peer-cert-file' is present AND '--peer-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+etcd 3847 3824 2 Sep11 ?
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and set the below parameter. +--peer-client-cert-auth=true + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--peer-client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and either remove the --peer-auto-tls parameter or set it to false. 
+--peer-auto-tls=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present
+```
+
+**Returned Value**:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/
+```
+
+### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+[Manual test]
+Follow the etcd documentation and create a dedicated certificate authority setup for the
+etcd service.
+Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the
+master node and set the below parameter.
+--trusted-ca-file=&lt;path to trusted CA file&gt;
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--trusted-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+etcd 3847 3824 2 Sep11 ?
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +## 3.1 Authentication and Authorization +### 3.1.1 Client certificate authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be +implemented in place of client certificates. + +### 3.1.2 Service account token authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented +in place of service account tokens. + +### 3.1.3 Bootstrap token authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented +in place of bootstrap tokens. + +## 3.2 Logging +### 3.2.1 Ensure that a minimal audit policy is created (Automated) + + +**Result:** pass + +**Remediation:** +Create an audit policy file for your cluster. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--audit-policy-file' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the audit policy provided for the cluster and ensure that it covers
+at least the following areas,
+- Access to Secrets managed by the cluster. Care should be taken to only
+  log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
+  order to avoid risk of logging sensitive data.
+- Modification of Pod and Deployment objects.
+- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
+For most requests, minimally logging at the Metadata level is recommended
+(the most basic level of logging).
+
+## 4.1 Worker Node Configuration Files
+### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service.
+All configuration is passed in as arguments at container run time.
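Most of the automated checks in this report audit a process command line (captured with `ps -ef`) for the presence or value of a flag. A minimal bash sketch of that pattern; the `flag_value` helper and the abbreviated sample command line are illustrative only, not part of kube-bench:

```shell
#!/usr/bin/env bash
set -euo pipefail

# flag_value CMDLINE FLAG: print the value given as --flag=value, or nothing if absent.
flag_value() {
  printf '%s\n' "$1" | tr ' ' '\n' | sed -n "s/^$2=//p"
}

# Abbreviated sample modeled on the kube-apiserver command line shown in this report.
cmdline='kube-apiserver --anonymous-auth=false --profiling=false --secure-port=6443'

auth=$(flag_value "$cmdline" --anonymous-auth)
port=$(flag_value "$cmdline" --secure-port)
echo "anonymous-auth=$auth secure-port=$port"
```

An absent flag yields an empty string, which is how "is not present" expectations (such as check 2.3) can be evaluated with the same helper.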
+
+### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service.
+All configuration is passed in as arguments at container run time.
+
+### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
+
+### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example, chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
+
+### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the following command to modify the file permissions of the --client-ca-file.
+chmod 600 &lt;filename&gt;
+
+**Audit:**
+
+```bash
+stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the following command to modify the ownership of the --client-ca-file.
+chown root:root &lt;filename&gt;
+
+**Audit:**
+
+```bash
+stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the following command (using the config file location identified in the Audit step)
+chmod 600 /var/lib/kubelet/config.yaml
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet.
+All configuration is passed in as arguments at container run time.
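Check 4.1.7 is the only failing item in this section: `/node/etc/kubernetes/ssl/kube-ca.pem` is mode 644 and needs a `chmod 600` followed by re-running the `stat` audit. A runnable sketch of that fix-and-verify loop; it operates on a temporary stand-in file rather than the real kube-ca.pem path, so it can be tried safely anywhere:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for /node/etc/kubernetes/ssl/kube-ca.pem (do not chmod real cluster files blindly).
ca_file=$(mktemp)
chmod 644 "$ca_file"               # reproduce the failing state reported by check 4.1.7

before=$(stat -c %a "$ca_file")    # audit, as in the benchmark
chmod 600 "$ca_file"               # remediation
after=$(stat -c %a "$ca_file")     # re-audit

echo "before=$before after=$after" # prints: before=644 after=600
rm -f "$ca_file"
```

On a real RKE node, run the equivalent `chmod` on the node filesystem and confirm the benchmark check passes on the next scan.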
+
+### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the following command (using the config file location identified in the Audit step)
+chown root:root /var/lib/kubelet/config.yaml
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet.
+All configuration is passed in as arguments at container run time.
+
+## 4.2 Kubelet
+### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
+`false`.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+`--anonymous-auth=false`
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--anonymous-auth' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ?
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If +using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--authorization-mode=Webhook +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--authorization-mode' does not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file 
to set `authentication.x509.clientCAFile` to +the location of the client CA file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--client-ca-file= +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--client-ca-file' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local 
--tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `readOnlyPort` to 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--read-only-port=0 +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--read-only-port' is equal to '0' OR '--read-only-port' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a +value other than 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--streaming-connection-idle-timeout=5m +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated) + + +**Result:** pass 
+ +**Remediation:** +If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +remove the --make-iptables-util-chains argument from the +KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 
--kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.7 Ensure that the --hostname-override argument is not set (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +on each worker node and remove the --hostname-override argument from the +KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service +Not Applicable - Clusters provisioned by RKE set the --hostname-override to avoid any hostname configuration errors + +### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--event-qps' is greater or equal to 0 OR '--event-qps' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `tlsCertFile` to the location +of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile` +to the location of the corresponding private key file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameters in KUBELET_CERTIFICATE_ARGS variable. 
+--tls-cert-file= +--tls-private-key-file= +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--tls-cert-file' is present AND '--tls-private-key-file' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.10 Ensure that the --rotate-certificates argument is 
not set to false (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or +remove it altogether to use the default value. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +remove the --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS +variable. +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--rotate-certificates' is present OR '--rotate-certificates' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true
--cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable. +--feature-gates=RotateKubeletServerCertificate=true +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service +Not Applicable - Clusters provisioned by RKE handle certificate rotation directly through RKE. + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `TLSCipherSuites` to +TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +or to a subset of these values. +If using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the --tls-cipher-suites parameter as follows, or to a subset of these values.
+--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.13 Ensure that a limit is set on pod PIDs (Manual) + + +**Result:** warn + +**Remediation:** +Decide on an appropriate level for this parameter and set it, +either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting. + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--pod-max-pids' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +## 5.1 RBAC and Service Accounts +### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual) + + +**Result:** warn + +**Remediation:** +Identify all clusterrolebindings to the cluster-admin role. Check if they are used and +if they need this role or if they could use a role with fewer privileges. 
+Where possible, first bind users to a lower privileged role and then remove the +clusterrolebinding to the cluster-admin role : +kubectl delete clusterrolebinding [name] + +### 5.1.2 Minimize access to secrets (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove get, list and watch access to Secret objects in the cluster. + +### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual) + + +**Result:** warn + +**Remediation:** +Where possible replace any use of wildcards in clusterroles and roles with specific +objects or actions. + +### 5.1.4 Minimize access to create pods (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove create access to pod objects in the cluster. + +### 5.1.5 Ensure that default service accounts are not actively used. (Manual) + + +**Result:** pass + +**Remediation:** +Create explicit service accounts wherever a Kubernetes workload requires specific access +to the Kubernetes API server. +Modify the configuration of each default service account to include this value +automountServiceAccountToken: false + +**Audit Script:** `check_for_default_sa.sh` + +```bash +#!/bin/bash + +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + +count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l) +if [[ ${count_sa} -gt 0 ]]; then + echo "false" + exit +fi + +for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name") +do + for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"') + do + read kind name <<<$(IFS=","; echo $result) + 
resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l) + if [[ ${resource_count} -gt 0 ]]; then + echo "false" + exit + fi + done +done + + +echo "true" + +``` + +**Audit Execution:** + +```bash +./check_for_default_sa.sh +``` + +**Expected Result**: + +```console +'true' is equal to 'true' +``` + +**Returned Value**: + +```console +true +``` + +### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) + + +**Result:** warn + +**Remediation:** +Modify the definition of pods and service accounts which do not need to mount service +account tokens to disable it. + +### 5.1.7 Avoid use of system:masters group (Manual) + + +**Result:** warn + +**Remediation:** +Remove the system:masters group from all users in the cluster. + +### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove the impersonate, bind and escalate rights from subjects. + +### 5.1.9 Minimize access to create persistent volumes (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove create access to PersistentVolume objects in the cluster. + +### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the proxy sub-resource of node objects. + +### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the approval sub-resource of certificatesigningrequest objects. 
+ +### 5.1.12 Minimize access to webhook configuration objects (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects + +### 5.1.13 Minimize access to the service account token creation (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the token sub-resource of serviceaccount objects. + +## 5.2 Pod Security Standards +### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual) + + +**Result:** warn + +**Remediation:** +Ensure that either Pod Security Admission or an external policy control system is in place +for every namespace which contains user workloads. + +### 5.2.2 Minimize the admission of privileged containers (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of privileged containers. + +### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostPID` containers. + +### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostIPC` containers. + +### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostNetwork` containers. 
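Several of the 5.2.x remediations above say to "add policies to each namespace". On clusters where Pod Security Admission is available, one illustrative way to do that is to label the namespace with a Pod Security Standard; the `baseline` profile rejects `hostPID`, `hostIPC`, and `hostNetwork` pods, among others. The namespace name below is a placeholder.

```yaml
# Hypothetical example: enforce the "baseline" Pod Security Standard on a
# user-workload namespace. "example-workloads" is a placeholder name.
apiVersion: v1
kind: Namespace
metadata:
  name: example-workloads
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: latest
```

An external policy engine (for example, OPA Gatekeeper or Kyverno) can enforce equivalent rules if finer-grained control is needed.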
+ +### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of containers with `.spec.allowPrivilegeEscalation` set to `true`. + +### 5.2.7 Minimize the admission of root containers (Manual) + + +**Result:** warn + +**Remediation:** +Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot` +or `MustRunAs` with the range of UIDs not including 0, is set. + +### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of containers with the `NET_RAW` capability. + +### 5.2.9 Minimize the admission of containers with added capabilities (Manual) + + +**Result:** warn + +**Remediation:** +Ensure that `allowedCapabilities` is not present in policies for the cluster unless +it is set to an empty array. + +### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual) + + +**Result:** warn + +**Remediation:** +Review the use of capabilities in applications running on your cluster. Where a namespace +contains applications which do not require any Linux capabilities to operate, consider adding +a PSP which forbids the admission of containers which do not drop all capabilities. + +### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
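A quick spot-check for 5.2.11 can scan pod specs for the HostProcess flag. This sketch inspects only the pod-level security context (container-level `securityContext` would need a similar pass); the inline `sample` JSON stands in for `kubectl get pods -A -o json` output, and the pod name is hypothetical.

```bash
# Flag pods whose pod-level securityContext requests a Windows HostProcess
# container. The sample substitutes for: kubectl get pods -A -o json
sample='{"items":[{"metadata":{"name":"hp-pod","namespace":"demo"},"spec":{"securityContext":{"windowsOptions":{"hostProcess":true}}}}]}'
echo "$sample" | jq -r '.items[]
  | select(.spec.securityContext.windowsOptions.hostProcess? == true)
  | "\(.metadata.namespace)/\(.metadata.name)"'
# Prints: demo/hp-pod
```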
+ +### 5.2.12 Minimize the admission of HostPath volumes (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of containers with `hostPath` volumes. + +### 5.2.13 Minimize the admission of containers which use HostPorts (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of containers which use `hostPort` sections. + +## 5.3 Network Policies and CNI +### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual) + + +**Result:** warn + +**Remediation:** +If the CNI plugin in use does not support network policies, consideration should be given to +making use of a different plugin, or finding an alternate mechanism for restricting traffic +in the Kubernetes cluster. + +### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual) + + +**Result:** warn + +**Remediation:** +Follow the documentation and create NetworkPolicy objects as you need them. + +## 5.4 Secrets Management +### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual) + + +**Result:** warn + +**Remediation:** +If possible, rewrite application code to read Secrets from mounted secret files, rather than +from environment variables. + +### 5.4.2 Consider external secret storage (Manual) + + +**Result:** warn + +**Remediation:** +Refer to the Secrets management options offered by your cloud provider or a third-party +secrets management solution. + +## 5.5 Extensible Admission Control +### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and setup image provenance. 
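As a starting point for 5.5.1, image provenance is wired up by passing an admission configuration file to the kube-apiserver via `--admission-control-config-file`. The sketch below is illustrative only: the file paths are placeholders, and the webhook backend that actually evaluates images must be deployed separately.

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: ImagePolicyWebhook
    configuration:
      imagePolicy:
        # Kubeconfig pointing at the image policy webhook backend (placeholder path).
        kubeConfigFile: /etc/kubernetes/image-policy/backend.kubeconfig
        allowTTL: 50        # seconds to cache an "allow" response
        denyTTL: 50         # seconds to cache a "deny" response
        retryBackoff: 500   # milliseconds between webhook retries
        defaultAllow: false # fail closed if the webhook is unreachable
```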
+ +## 5.7 General Policies +### 5.7.1 Create administrative boundaries between resources using namespaces (Manual) + + +**Result:** warn + +**Remediation:** +Follow the documentation and create namespaces for objects in your deployment as you need +them. + +### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual) + + +**Result:** warn + +**Remediation:** +Use `securityContext` to enable the docker/default seccomp profile in your pod definitions. +An example is as below: + securityContext: + seccompProfile: + type: RuntimeDefault + +### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a +suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker +Containers. + +### 5.7.4 The default namespace should not be used (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Ensure that namespaces are created to allow for appropriate segregation of Kubernetes +resources and that all new resources are created in a specific namespace. diff --git a/versioned_docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md b/versioned_docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md new file mode 100644 index 00000000000..7954c8a3b11 --- /dev/null +++ b/versioned_docs/version-2.12/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md @@ -0,0 +1,365 @@ +--- +title: RKE Cluster Configuration Reference +--- + + + + + + + +When Rancher installs Kubernetes, it uses [RKE](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) or [RKE2](https://docs.rke2.io/) as the Kubernetes distribution. 
+ +This section covers the configuration options that are available in Rancher for a new or existing RKE Kubernetes cluster. + + +## Overview + +You can configure the Kubernetes options in one of two ways: + +- [Rancher UI](#configuration-options-in-the-rancher-ui): Use the Rancher UI to select options that are commonly customized when setting up a Kubernetes cluster. +- [Cluster Config File](#rke-cluster-config-file-reference): Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the options available in an RKE installation, except for `system_images` configuration, by specifying them in YAML. + +The RKE cluster config options are nested under the `rancher_kubernetes_engine_config` directive. For more information, see the section about the [cluster config file.](#rke-cluster-config-file-reference) + +In [clusters launched by RKE](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md), you can edit any of the remaining options that follow. + +For an example of RKE config file syntax, see the [RKE documentation](https://rancher.com/docs/rke/latest/en/example-yamls/). + +The forms in the Rancher UI don't include all advanced options for configuring RKE. For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/) + +## Editing Clusters with a Form in the Rancher UI + +To edit your cluster, + +1. In the upper left corner, click **☰ > Cluster Management**. +1. Go to the cluster you want to configure and click **⋮ > Edit Config**. + + +## Editing Clusters with YAML + +Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file.
Using a config file allows you to set any of the options available in an RKE installation, except for `system_images` configuration, by specifying them in YAML. + +RKE clusters (also called RKE1 clusters) are edited differently than RKE2 and K3s clusters. + +To edit an RKE config file directly from the Rancher UI, + +1. Click **☰ > Cluster Management**. +1. Go to the RKE cluster you want to configure and click **⋮ > Edit Config**. This takes you to the RKE configuration form. Note: Because cluster provisioning changed in Rancher 2.6, the **⋮ > Edit as YAML** option can be used for configuring RKE2 clusters, but it can't be used for editing RKE1 configuration. +1. In the configuration form, scroll down and click **Edit as YAML**. +1. Edit the RKE options under the `rancher_kubernetes_engine_config` directive. + +## Configuration Options in the Rancher UI + +:::tip + +Some advanced configuration options are not exposed in the Rancher UI forms, but they can be enabled by editing the RKE cluster configuration file in YAML. For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/) + +::: + +### Kubernetes Version + +The version of Kubernetes installed on your cluster nodes. Rancher packages its own version of Kubernetes based on [hyperkube](https://github.com/rancher/hyperkube). + +For more detail, see [Upgrading Kubernetes](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md). + +### Network Provider + +The [Network Provider](https://kubernetes.io/docs/concepts/cluster-administration/networking/) that the cluster uses. For more details on the different networking providers, please view our [Networking FAQ](../../../faq/container-network-interface-providers.md). + +:::caution + +After you launch the cluster, you cannot change your network provider.
Therefore, choose your network provider carefully, as Kubernetes doesn't allow switching between network providers. Changing network providers would require you to tear down the entire cluster and all its applications. + +::: + +Out of the box, Rancher is compatible with the following network providers: + +- [Canal](https://github.com/projectcalico/canal) +- [Flannel](https://github.com/coreos/flannel#flannel) +- [Calico](https://docs.projectcalico.org/v3.11/introduction/) +- [Weave](https://github.com/weaveworks/weave) + + + +:::note Notes on Weave: + +When Weave is selected as the network provider, Rancher will automatically enable encryption by generating a random password. If you want to specify the password manually, please see how to configure your cluster using a [Config File](#rke-cluster-config-file-reference) and the [Weave Network Plug-in Options](https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/#weave-network-plug-in-options). + +::: + +### Project Network Isolation + +If your network provider allows project network isolation, you can choose whether to enable or disable inter-project communication. + +Project network isolation is available if you are using any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin. + +### Kubernetes Cloud Providers + +You can configure a [Kubernetes cloud provider](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md). If you want to use dynamically provisioned [volumes and storage](../../../how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md) in Kubernetes, typically you must select the specific cloud provider in order to use it. For example, if you want to use Amazon EBS, you would need to select the `aws` cloud provider.
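When the cloud provider is set through the cluster config file rather than the UI, it is nested under the `rancher_kubernetes_engine_config` directive. A minimal sketch for the `aws` provider (this assumes the cluster nodes already have an appropriate IAM instance profile attached):

```yaml
rancher_kubernetes_engine_config:
  cloud_provider:
    name: aws
```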
+ +:::note + +If the cloud provider you want to use is not listed as an option, you will need to use the [config file option](#rke-cluster-config-file-reference) to configure the cloud provider. Please reference the [RKE cloud provider documentation](https://rancher.com/docs/rke/latest/en/config-options/cloud-providers/) on how to configure the cloud provider. + +::: + +### Private Registries + +The cluster-level private registry configuration is only used for provisioning clusters. + +There are two main ways to set up private registries in Rancher: by setting up the [global default registry](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry.md) through the **Settings** tab in the global view, and by setting up a private registry in the advanced options in the cluster-level settings. The global default registry is intended to be used for air-gapped setups, for registries that do not require credentials. The cluster-level private registry is intended to be used in all setups in which the private registry requires credentials. + +If your private registry requires credentials, you need to pass the credentials to Rancher by editing the cluster options for each cluster that needs to pull images from the registry. + +The private registry configuration option tells Rancher where to pull the [system images](https://rancher.com/docs/rke/latest/en/config-options/system-images/) or [addon images](https://rancher.com/docs/rke/latest/en/config-options/add-ons/) that will be used in your cluster. + +- **System images** are components needed to maintain the Kubernetes cluster. +- **Add-ons** are used to deploy several cluster components, including network plug-ins, the ingress controller, the DNS provider, or the metrics server. 
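In the cluster config file, the cluster-level registry is set with the RKE `private_registries` directive. A sketch with a placeholder registry address and credentials (omit `user` and `password` if the registry does not require authentication):

```yaml
rancher_kubernetes_engine_config:
  private_registries:
    - url: registry.example.com   # hypothetical registry address
      user: pull-user
      password: examplepassword
      is_default: true            # pull system and addon images from this registry
```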
+ +For more information on setting up a private registry for components applied during the provisioning of the cluster, see the [RKE documentation on private registries](https://rancher.com/docs/rke/latest/en/config-options/private-registries/). + +Rancher v2.6 introduced the ability to configure [ECR registries for RKE clusters](https://rancher.com/docs/rke/latest/en/config-options/private-registries/#amazon-elastic-container-registry-ecr-private-registry-setup). + +### Authorized Cluster Endpoint + +Authorized Cluster Endpoint (ACE) can be used to directly access the Kubernetes API server, without requiring communication through Rancher. + +:::note + +ACE is available on RKE, RKE2, and K3s clusters that are provisioned or registered with Rancher. It's not available on clusters in a hosted Kubernetes provider, such as Amazon's EKS. + +::: + +ACE must be set up [manually](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md#authorized-cluster-endpoint-support-for-rke2-and-k3s-clusters) on RKE2 and K3s clusters. In RKE, ACE is enabled by default in Rancher-launched Kubernetes clusters, using the IP of the node with the `controlplane` role and the default Kubernetes self-signed certificates. + +For more detail on how an authorized cluster endpoint works and why it is used, refer to the [architecture section.](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) + +We recommend using a load balancer with the authorized cluster endpoint. 
For details, refer to the [recommended architecture section.](../../rancher-manager-architecture/architecture-recommendations.md#architecture-for-an-authorized-cluster-endpoint-ace) + +### Node Pools + +For information on using the Rancher UI to set up node pools in an RKE cluster, refer to [this page.](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md) + +### NGINX Ingress + +If you want to publish your applications in a high-availability configuration, and you're hosting your nodes with a cloud provider that doesn't have a native load-balancing feature, enable this option to use NGINX Ingress within the cluster. + +### Metrics Server Monitoring + +Option to enable or disable [Metrics Server](https://rancher.com/docs/rke/latest/en/config-options/add-ons/metrics-server/). + +Each cloud provider capable of launching a cluster using RKE can collect metrics and monitoring data for your cluster nodes. Enable this option to view your node metrics from your cloud provider's portal. + +You must have an existing pod security policy configured before you can enable **Pod Security Policy Support**. + +### Docker Version on Nodes + +Configures whether nodes are allowed to run versions of Docker that Rancher doesn't officially support. + +If you choose to require a supported Docker version, Rancher will stop pods from running on nodes that don't have a supported Docker version installed. + +For details on which Docker versions were tested with each Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/) + +### Docker Root Directory + +If the nodes you are adding to the cluster have Docker configured with a non-default Docker Root Directory (default is `/var/lib/docker`), specify the correct Docker Root Directory in this option.
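In the cluster config file, the Docker root directory is a Rancher-specific, top-level key rather than part of the nested RKE config. A sketch assuming the nodes store Docker data under a hypothetical `/mnt/docker` path:

```yaml
docker_root_dir: /mnt/docker
```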
+ +### Default Pod Security Policy + +If you enable **Pod Security Policy Support**, use this drop-down to choose the pod security policy that's applied to the cluster. + +### Node Port Range + +Option to change the range of ports that can be used for [NodePort services](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport). Default is `30000-32767`. + +### Recurring etcd Snapshots + +Option to enable or disable [recurring etcd snapshots](https://rancher.com/docs/rke/latest/en/etcd-snapshots/#etcd-recurring-snapshots). + +### Agent Environment Variables + +Option to set environment variables for [Rancher agents](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md). The environment variables can be set using key-value pairs. If the Rancher agent requires a proxy to communicate with the Rancher server, the `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` environment variables can be set using agent environment variables. + +### Updating ingress-nginx + +Clusters that were created before Kubernetes 1.16 will have an `ingress-nginx` `updateStrategy` of `OnDelete`. Clusters that were created with Kubernetes 1.16 or newer will have `RollingUpdate`. + +If the `updateStrategy` of `ingress-nginx` is `OnDelete`, you will need to delete the `ingress-nginx` pods to get the correct version for your deployment. + +### Cluster Agent Configuration and Fleet Agent Configuration + +You can configure the scheduling fields and resource limits for the Cluster Agent and the cluster's Fleet Agent. You can use these fields to customize tolerations, affinity rules, and resource requirements. Additional tolerations are appended to a list of default tolerations and control plane node taints. If you define custom affinity rules, they override the global default affinity setting. Defining resource requirements sets requests or limits where there previously were none.
+ +:::note + +With this option, it's possible to override or remove rules that are required for the functioning of the cluster. We strongly recommend against removing or overriding these and any other affinity rules, as this may cause unwanted side effects: + +- `affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution` for `cattle-cluster-agent` +- `cluster-agent-default-affinity` for `cattle-cluster-agent` +- `fleet-agent-default-affinity` for `fleet-agent` + +::: + +If you downgrade Rancher to v2.7.4 or below, your changes will be lost and the agents will re-deploy without your customizations. The Fleet agent will fall back to using its built-in default values when it re-deploys. If the Fleet version doesn't change during the downgrade, the re-deploy won't be immediate. + + +## RKE Cluster Config File Reference + +Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the [options available](https://rancher.com/docs/rke/latest/en/config-options/) in an RKE installation, except for `system_images` configuration. The `system_images` option is not supported when creating a cluster with the Rancher UI or API. + +For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/) + +### Config File Structure in Rancher + +RKE (Rancher Kubernetes Engine) is the tool that Rancher uses to provision Kubernetes clusters. Rancher's cluster config files used to have the same structure as [RKE config files,](https://rancher.com/docs/rke/latest/en/example-yamls/) but the structure changed so that in Rancher, RKE cluster config items are separated from non-RKE config items. Therefore, configuration for your cluster needs to be nested under the `rancher_kubernetes_engine_config` directive in the cluster config file.
Cluster config files created with earlier versions of Rancher will need to be updated for this format. An example cluster config file is included below. + +
+ Example Cluster Config File + +```yaml +# +# Cluster Config +# +docker_root_dir: /var/lib/docker +enable_cluster_alerting: false +enable_cluster_monitoring: false +enable_network_policy: false +local_cluster_auth_endpoint: + enabled: true +# +# Rancher Config +# +rancher_kubernetes_engine_config: # Your RKE template config goes here. + addon_job_timeout: 30 + authentication: + strategy: x509 + ignore_docker_version: true +# +# # Currently only nginx ingress provider is supported. +# # To disable ingress controller, set `provider: none` +# # To enable ingress on specific nodes, use the node_selector, eg: +# provider: nginx +# node_selector: +# app: ingress +# + ingress: + provider: nginx + kubernetes_version: v1.15.3-rancher3-1 + monitoring: + provider: metrics-server +# +# If you are using calico on AWS +# +# network: +# plugin: calico +# calico_network_provider: +# cloud_provider: aws +# +# # To specify flannel interface +# +# network: +# plugin: flannel +# flannel_network_provider: +# iface: eth1 +# +# # To specify flannel interface for canal plugin +# +# network: +# plugin: canal +# canal_network_provider: +# iface: eth1 +# + network: + options: + flannel_backend_type: vxlan + plugin: canal +# +# services: +# kube-api: +# service_cluster_ip_range: 10.43.0.0/16 +# kube-controller: +# cluster_cidr: 10.42.0.0/16 +# service_cluster_ip_range: 10.43.0.0/16 +# kubelet: +# cluster_domain: cluster.local +# cluster_dns_server: 10.43.0.10 +# + services: + etcd: + backup_config: + enabled: true + interval_hours: 12 + retention: 6 + safe_timestamp: false + creation: 12h + extra_args: + election-timeout: 5000 + heartbeat-interval: 500 + gid: 0 + retention: 72h + snapshot: false + uid: 0 + kube_api: + always_pull_images: false + pod_security_policy: false + service_node_port_range: 30000-32767 + ssh_agent_auth: false +windows_prefered_cluster: false +``` +
+ +### Default DNS provider + +The table below indicates what DNS provider is deployed by default. See [RKE documentation on DNS provider](https://rancher.com/docs/rke/latest/en/config-options/add-ons/dns/) for more information on how to configure a different DNS provider. CoreDNS can only be used on Kubernetes v1.12.0 and higher. + +| Rancher version | Kubernetes version | Default DNS provider | +|-------------|--------------------|----------------------| +| v2.2.5 and higher | v1.14.0 and higher | CoreDNS | +| v2.2.5 and higher | v1.13.x and lower | kube-dns | +| v2.2.4 and lower | any | kube-dns | + +## Rancher Specific Parameters in YAML + +Besides the RKE config file options, there are also Rancher-specific settings that can be configured in the Config File (YAML): + +### docker_root_dir + +See [Docker Root Directory](#docker-root-directory). + +### enable_cluster_monitoring + +Option to enable or disable [Cluster Monitoring](../../../integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md). + +### enable_network_policy + +Option to enable or disable Project Network Isolation. + +Project network isolation is available if you are using any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin. + +### local_cluster_auth_endpoint + +See [Authorized Cluster Endpoint](#authorized-cluster-endpoint). + +Example: + +```yaml +local_cluster_auth_endpoint: + enabled: true + fqdn: "FQDN" + ca_certs: |- + -----BEGIN CERTIFICATE----- + ... + -----END CERTIFICATE----- +``` + +### Custom Network Plug-in + +You can add a custom network plug-in by using the [user-defined add-on functionality](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/) of RKE. You define any add-on that you want deployed after the Kubernetes cluster is deployed.
+ +There are two ways that you can specify an add-on: + +- [In-line Add-ons](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/#in-line-add-ons) +- [Referencing YAML Files for Add-ons](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/#referencing-yaml-files-for-add-ons) + +For an example of how to configure a custom network plug-in by editing the `cluster.yml`, refer to the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/custom-network-plugin-example) \ No newline at end of file diff --git a/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md b/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md new file mode 100644 index 00000000000..35ecd76ead2 --- /dev/null +++ b/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md @@ -0,0 +1,513 @@ +--- +title: RKE Hardening Guides +--- + + + + + + + +This document provides prescriptive guidance for how to harden an RKE cluster intended for production, before provisioning it with Rancher. It outlines the configurations and controls required for the Center for Internet Security (CIS) Kubernetes benchmark controls. + +:::note +This hardening guide describes how to secure the nodes in your cluster. We recommend that you follow this guide before you install Kubernetes.
+::: + +This hardening guide is intended to be used for RKE clusters and is associated with the following versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher: + +| Rancher Version | CIS Benchmark Version | Kubernetes Version | +|-----------------|-----------------------|------------------------------| +| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.23 | +| Rancher v2.7 | Benchmark v1.24 | Kubernetes v1.24 | +| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25 up to v1.26 | + +:::note +- In Benchmark v1.24 and later, check id `4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)` might fail, as `/etc/kubernetes/ssl/kube-ca.pem` is set to 644 by default. +- In Benchmark v1.7, the `--protect-kernel-defaults` (`4.2.6`) parameter isn't required anymore, and was removed by CIS. +::: + +For more details on how to evaluate a hardened RKE cluster against the official CIS benchmark, refer to the RKE self-assessment guides for specific Kubernetes and CIS benchmark versions. + +## Host-level requirements + +### Configure Kernel Runtime Parameters + +The following `sysctl` configuration is recommended for all node types in the cluster. Set the following parameters in `/etc/sysctl.d/90-kubelet.conf`: + +```ini +vm.overcommit_memory=1 +vm.panic_on_oom=0 +kernel.panic=10 +kernel.panic_on_oops=1 +``` + +Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings. + +### Configure `etcd` user and group + +A user account and group for the **etcd** service must be set up before installing RKE. + +#### Create `etcd` user and group + +To create the **etcd** user and group, run the following console commands. +The commands below use `52034` for **uid** and **gid** for example purposes. +Any valid unused **uid** or **gid** could also be used in lieu of `52034`.
+ +```bash +groupadd --gid 52034 etcd +useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd --shell /usr/sbin/nologin +``` + +When deploying RKE through its cluster configuration `config.yml` file, update the `uid` and `gid` of the `etcd` user: + +```yaml +services: + etcd: + gid: 52034 + uid: 52034 +``` + +## Kubernetes runtime requirements + +### Configure `default` Service Account + +#### Set `automountServiceAccountToken` to `false` for `default` service accounts + +Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. +Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. +The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments. + +For each namespace including `default` and `kube-system` on a standard RKE install, the `default` service account must include this value: + +```yaml +automountServiceAccountToken: false +``` + +Save the following configuration to a file called `account_update.yaml`. + +```yaml +apiVersion: v1 +kind: ServiceAccount +metadata: + name: default +automountServiceAccountToken: false +``` + +Create a bash script file called `account_update.sh`. +Be sure to `chmod +x account_update.sh` so the script has execute permissions. + +```bash +#!/bin/bash -e + +for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do + kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)" +done +``` + +Execute this script to apply the `account_update.yaml` configuration to `default` service account in all namespaces. 
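Where a pod does need Kubernetes API access, the guidance above is to create a dedicated service account for it rather than re-enabling the `default` account. A sketch (the account, namespace, and image names are hypothetical):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-reader
  namespace: example-ns
automountServiceAccountToken: true
---
apiVersion: v1
kind: Pod
metadata:
  name: example-app
  namespace: example-ns
spec:
  serviceAccountName: app-reader
  containers:
    - name: app
      image: registry.example.com/app:latest
```

Rights would then be granted to `app-reader` with RBAC Role and RoleBinding objects scoped to just what the pod needs.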
+ +### Configure Network Policy + +#### Ensure that all Namespaces have Network Policies defined + +Running different applications on the same Kubernetes cluster creates a risk of one compromised application attacking a neighboring application. Network segmentation is important to ensure that containers can communicate only with those they are supposed to. A network policy is a specification of how selections of pods are allowed to communicate with each other and other network endpoints. + +Network Policies are namespace scoped. When a network policy is introduced to a given namespace, all traffic not allowed by the policy is denied. However, if there are no network policies in a namespace, all traffic will be allowed into and out of the pods in that namespace. To enforce network policies, a container network interface (CNI) plugin must be enabled. This guide uses [Canal](https://github.com/projectcalico/canal) to provide the policy enforcement. Additional information about CNI providers can be found [here](https://www.suse.com/c/rancher_blog/comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/). + +Once a CNI provider is enabled on a cluster, a default network policy can be applied. For reference purposes, a **permissive** example is provided below. If you want to allow all traffic to all pods in a namespace (even if policies are added that cause some pods to be treated as “isolated”), you can create a policy that explicitly allows all traffic in that namespace. Save the following configuration as `default-allow-all.yaml`. Additional [documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/) about network policies can be found on the Kubernetes site. + +:::caution +This network policy is just an example and is not recommended for production use.
+::: + +```yaml +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: default-allow-all +spec: + podSelector: {} + ingress: + - {} + egress: + - {} + policyTypes: + - Ingress + - Egress +``` + +Create a bash script file called `apply_networkPolicy_to_all_ns.sh`. Be sure to `chmod +x apply_networkPolicy_to_all_ns.sh` so the script has execute permissions. + +```bash +#!/bin/bash -e + +for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do + kubectl apply -f default-allow-all.yaml -n ${namespace} +done +``` + +Execute this script to apply the `default-allow-all.yaml` configuration with the **permissive** `NetworkPolicy` to all namespaces. + +## Known Limitations + +- Rancher **exec shell** and **view logs** for pods are **not** functional in a hardened setup when only a public IP is provided when registering custom nodes. This functionality requires a private IP to be provided when registering the custom nodes. + +## Reference Hardened RKE `cluster.yml` Configuration + +The reference `cluster.yml` is used by the RKE CLI that provides the configuration needed to achieve a hardened installation of RKE. RKE [documentation](https://rancher.com/docs/rke/latest/en/installation/) provides additional details about the configuration items. This reference `cluster.yml` does not include the required `nodes` directive which will vary depending on your environment. Documentation for node configuration in RKE can be found [here](https://rancher.com/docs/rke/latest/en/config-options/nodes/). + +The example `cluster.yml` configuration file contains an Admission Configuration policy in the `services.kube-api.admission_configuration` field. 
This [sample](../../psa-restricted-exemptions.md) policy contains the namespace exemptions necessary for an imported RKE cluster to run properly in Rancher, similar to Rancher's pre-defined [`rancher-restricted`](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) policy. + +If you prefer to use RKE's default `restricted` policy, then leave the `services.kube-api.admission_configuration` field empty and set `services.pod_security_configuration` to `restricted`. See [the RKE docs](https://rke.docs.rancher.com/config-options/services/pod-security-admission) for more information. + + + + +:::note +If you intend to import an RKE cluster into Rancher, please consult the [documentation](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) for how to configure the PSA to exempt Rancher system namespaces. +::: + +```yaml +# If you intend to deploy Kubernetes in an air-gapped environment, +# please consult the documentation on how to configure custom RKE images. +nodes: [] +kubernetes_version: # Define RKE version +services: + etcd: + uid: 52034 + gid: 52034 + kube-api: + secrets_encryption_config: + enabled: true + audit_log: + enabled: true + event_rate_limit: + enabled: true + # Leave `pod_security_configuration` out if you are setting a + # custom policy in `admission_configuration`. Otherwise set + # it to `restricted` to use RKE's pre-defined restricted policy, + # and remove everything inside `admission_configuration` field. 
+ # + # pod_security_configuration: restricted + # + admission_configuration: + apiVersion: apiserver.config.k8s.io/v1 + kind: AdmissionConfiguration + plugins: + - name: PodSecurity + configuration: + apiVersion: pod-security.admission.config.k8s.io/v1 + kind: PodSecurityConfiguration + defaults: + enforce: "restricted" + enforce-version: "latest" + audit: "restricted" + audit-version: "latest" + warn: "restricted" + warn-version: "latest" + exemptions: + usernames: [] + runtimeClasses: [] + namespaces: [calico-apiserver, + calico-system, + cattle-alerting, + cattle-csp-adapter-system, + cattle-elemental-system, + cattle-epinio-system, + cattle-externalip-system, + cattle-fleet-local-system, + cattle-fleet-system, + cattle-gatekeeper-system, + cattle-global-data, + cattle-global-nt, + cattle-impersonation-system, + cattle-istio, + cattle-istio-system, + cattle-logging, + cattle-logging-system, + cattle-monitoring-system, + cattle-neuvector-system, + cattle-prometheus, + cattle-provisioning-capi-system, + cattle-resources-system, + cattle-sriov-system, + cattle-system, + cattle-ui-plugin-system, + cattle-windows-gmsa-system, + cert-manager, + cis-operator-system, + fleet-default, + ingress-nginx, + istio-system, + kube-node-lease, + kube-public, + kube-system, + longhorn-system, + rancher-alerting-drivers, + security-scan, + tigera-operator] + kube-controller: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + kubelet: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + generate_serving_certificate: true +addons: | + apiVersion: networking.k8s.io/v1 + kind: NetworkPolicy + metadata: + name: default-allow-all + spec: + podSelector: {} + ingress: + - {} + egress: + - {} + policyTypes: + - Ingress + - Egress + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: default + automountServiceAccountToken: false +``` + + + + +```yaml +# If you intend to deploy Kubernetes in an air-gapped environment, +# please consult the 
documentation on how to configure custom RKE images. +nodes: [] +kubernetes_version: # Define RKE version +services: + etcd: + uid: 52034 + gid: 52034 + kube-api: + secrets_encryption_config: + enabled: true + audit_log: + enabled: true + event_rate_limit: + enabled: true + pod_security_policy: true + kube-controller: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + kubelet: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + protect-kernel-defaults: true + generate_serving_certificate: true +addons: | + # Upstream Kubernetes restricted PSP policy + # https://github.com/kubernetes/website/blob/564baf15c102412522e9c8fc6ef2b5ff5b6e766c/content/en/examples/policy/restricted-psp.yaml + apiVersion: policy/v1beta1 + kind: PodSecurityPolicy + metadata: + name: restricted-noroot + spec: + privileged: false + # Required to prevent escalations to root. + allowPrivilegeEscalation: false + requiredDropCapabilities: + - ALL + # Allow core volume types. + volumes: + - 'configMap' + - 'emptyDir' + - 'projected' + - 'secret' + - 'downwardAPI' + # Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use. + - 'csi' + - 'persistentVolumeClaim' + - 'ephemeral' + hostNetwork: false + hostIPC: false + hostPID: false + runAsUser: + # Require the container to run without root privileges. + rule: 'MustRunAsNonRoot' + seLinux: + # This policy assumes the nodes are using AppArmor rather than SELinux. + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. + - min: 1 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. 
+ - min: 1 + max: 65535 + readOnlyRootFilesystem: false + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: psp:restricted-noroot + rules: + - apiGroups: + - extensions + resourceNames: + - restricted-noroot + resources: + - podsecuritypolicies + verbs: + - use + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: psp:restricted-noroot + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: psp:restricted-noroot + subjects: + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:serviceaccounts + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:authenticated + --- + apiVersion: networking.k8s.io/v1 + kind: NetworkPolicy + metadata: + name: default-allow-all + spec: + podSelector: {} + ingress: + - {} + egress: + - {} + policyTypes: + - Ingress + - Egress + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: default + automountServiceAccountToken: false +``` + + + + +## Reference Hardened RKE Cluster Template Configuration + +The reference RKE cluster template provides the minimum required configuration to achieve a hardened installation of Kubernetes. RKE templates are used to provision Kubernetes and define Rancher settings. Follow the Rancher [documentation](../../../../getting-started/installation-and-upgrade/installation-and-upgrade.md) for additional information about installing RKE and its template details. 
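Before provisioning, a template like the ones above can be given a quick sanity check. The following is a minimal grep-based sketch, not an official validator; the embedded sample config and the five checks are illustrative assumptions, and passing them is no substitute for a full CIS scan:

```shell
#!/usr/bin/env bash
# Rough sanity check for a few hardened settings in an RKE cluster config.
# The sample config written below is illustrative only.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
services:
  etcd:
    uid: 52034
    gid: 52034
  kube-api:
    secrets_encryption_config:
      enabled: true
    audit_log:
      enabled: true
    event_rate_limit:
      enabled: true
  kubelet:
    extra_args:
      feature-gates: RotateKubeletServerCertificate=true
    generate_serving_certificate: true
EOF

failures=0
check() {
  # $1 = description, $2 = pattern expected somewhere in the file
  if grep -Eq "$2" "$cfg"; then
    echo "PASS: $1"
  else
    echo "FAIL: $1"
    failures=$((failures + 1))
  fi
}

check "etcd runs as the non-root etcd user"  '^ *uid: *52034'
check "secrets encryption is configured"     'secrets_encryption_config'
check "API audit logging is configured"      'audit_log'
check "event rate limiting is configured"    'event_rate_limit'
check "kubelet serving-cert rotation gate"   'RotateKubeletServerCertificate=true'
rm -f "$cfg"

echo "$failures check(s) failed"   # prints: 0 check(s) failed
```

A grep only proves a key appears somewhere in the file, not that it is nested under the right service, so treat a PASS here as a hint and rely on the CIS scan for the real verdict.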
+ + + + +```yaml +# +# Cluster Config +# +default_pod_security_admission_configuration_template_name: rancher-restricted +enable_network_policy: true +local_cluster_auth_endpoint: + enabled: true +name: # Define cluster name + +# +# Rancher Config +# +rancher_kubernetes_engine_config: + addon_job_timeout: 45 + authentication: + strategy: x509|webhook + kubernetes_version: # Define RKE version + services: + etcd: + uid: 52034 + gid: 52034 + kube-api: + audit_log: + enabled: true + event_rate_limit: + enabled: true + pod_security_policy: false + secrets_encryption_config: + enabled: true + kube-controller: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 + kubelet: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 + generate_serving_certificate: true + scheduler: + extra_args: + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +``` + + + + +```yaml +# +# Cluster Config +# +default_pod_security_policy_template_id: restricted-noroot +enable_network_policy: true +local_cluster_auth_endpoint: + enabled: true +name: # Define cluster name + +# +# Rancher 
Config +# +rancher_kubernetes_engine_config: + addon_job_timeout: 45 + authentication: + strategy: x509|webhook + kubernetes_version: # Define RKE version + services: + etcd: + uid: 52034 + gid: 52034 + kube-api: + audit_log: + enabled: true + event_rate_limit: + enabled: true + pod_security_policy: true + secrets_encryption_config: + enabled: true + kube-controller: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 + kubelet: + extra_args: + feature-gates: RotateKubeletServerCertificate=true + protect-kernel-defaults: true + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 + generate_serving_certificate: true + scheduler: + extra_args: + tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +``` + + + + +## Conclusion + +If you have followed this guide, your RKE custom cluster provisioned by Rancher will be configured to pass the CIS Kubernetes Benchmark. You can review our RKE self-assessment guides to understand how we verified each of the benchmarks and how you can do the same on your cluster. 
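The self-assessment guides verify most control plane settings by reading flags off the running component's command line (via `ps -ef`). That pattern can be sketched in a few lines of shell; the sample command line here is illustrative, not captured from a real cluster:

```shell
#!/usr/bin/env bash
# Extract the value of a --flag=value argument from a process command line,
# the same pattern the self-assessment audits apply to `ps -ef` output.
# The sample line below is illustrative, not output from a live cluster.
line='kube-apiserver --anonymous-auth=false --profiling=false --authorization-mode=Node,RBAC'

flag_value() {
  # $1 = command line, $2 = flag name without the leading --
  echo "$1" | tr ' ' '\n' | sed -n "s/^--$2=//p"
}

flag_value "$line" anonymous-auth      # prints: false
flag_value "$line" authorization-mode  # prints: Node,RBAC
```

Splitting on spaces and matching `^--flag=` avoids false positives from one flag name being a substring of another, which a plain `grep` of the whole line would not.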
\ No newline at end of file
diff --git a/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md b/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md
new file mode 100644
index 00000000000..e8bf71f7c78
--- /dev/null
+++ b/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md
@@ -0,0 +1,2864 @@
+---
+title: RKE Self-Assessment Guide - CIS Benchmark v1.7 - K8s v1.25/v1.26/v1.27
+---
+
+
+
+
+
+
+
+This document is a companion to the [RKE Hardening Guide](rke1-hardening-guide.md), which provides prescriptive guidance on how to harden RKE clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
+
+
+This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
+
+| Rancher Version | CIS Benchmark Version | Kubernetes Version |
+|-----------------|-----------------------|--------------------|
+| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25/v1.26/v1.27 |
+
+This guide walks through the various controls and provides updated example commands to audit compliance in Rancher-created clusters. Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. These checks will return a result of `Not Applicable`.
+
+This document is for Rancher operators, security teams, auditors, and decision makers.
+
+For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.7.
You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/). + +## Testing Methodology + +Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files. + +Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the command line on the hosts of all RKE nodes. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required in the testing and evaluation of test results. + +:::note + +This guide only covers `automated` (previously called `scored`) tests. + +::: + +### Controls + +## 1.1 Control Plane Node Configuration Files +### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the +control plane node. +For example, chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. +All configuration is passed in as arguments at container run time. + +### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. 
+All configuration is passed in as arguments at container run time.
+
+### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 /etc/kubernetes/manifests/etcd.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root /etc/kubernetes/manifests/etcd.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600
+
+**Audit:**
+
+```bash
+ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
+```
+
+**Expected Result**:
+
+```console
+'permissions' is present
+```
+
+### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root
+
+**Audit:**
+
+```bash
+ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above). For example,
+chmod 700 /var/lib/etcd
+
+**Audit:**
+
+```bash
+stat -c %a /node/var/lib/etcd
+```
+
+**Expected Result**:
+
+```console
+'700' is equal to '700'
+```
+
+**Returned Value**:
+
+```console
+700
+```
+
+### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above).
+For example, chown etcd:etcd /var/lib/etcd + +**Audit:** + +```bash +stat -c %U:%G /node/var/lib/etcd +``` + +**Expected Result**: + +```console +'etcd:etcd' is present +``` + +**Returned Value**: + +```console +etcd:etcd +``` + +### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chmod 600 /etc/kubernetes/admin.conf +Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. + +### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chown root:root /etc/kubernetes/admin.conf +Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. + +### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chmod 600 scheduler +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. +All configuration is passed in as arguments at container run time. + +### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root scheduler +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. +All configuration is passed in as arguments at container run time. 
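The file checks in this section all reduce to the same test: the mode must set no permission bits beyond the allowed maximum. A small sketch of that bitmask comparison, run against a temporary file rather than a real kubeconfig (assumes GNU `stat`, as found on Linux nodes):

```shell
#!/usr/bin/env bash
# "600 or more restrictive" means the mode sets no bits outside rw for the
# owner. Demonstrated on a temporary file, not a real kubeconfig.
f=$(mktemp)
chmod 600 "$f"

mode_ok() {
  # $1 = file to check; relies on GNU stat's -c option (Linux)
  perms=$(stat -c %a "$1")
  extra=$(( 0$perms & ~0600 ))   # permission bits beyond rw-------
  if [ "$extra" -eq 0 ]; then
    echo "ok ($perms)"
  else
    echo "too permissive ($perms)"
  fi
}

result=$(mode_ok "$f")
echo "$result"   # prints: ok (600)
rm -f "$f"
```

A numeric "less than 600" comparison would wrongly accept modes like 444, which are world-readable; masking the octal mode against the allowed bits handles such cases correctly, which is why tools like kube-bench test bits rather than magnitudes.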
+ +### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chmod 600 controllermanager +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. +All configuration is passed in as arguments at container run time. + +### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root controllermanager +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. +All configuration is passed in as arguments at container run time. + +### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. 
+For example, +chown -R root:root /etc/kubernetes/pki/ + +**Audit Script:** `check_files_owner_in_dir.sh` + +```bash +#!/usr/bin/env bash + +# This script is used to ensure the owner is set to root:root for +# the given directory and all the files in it +# +# inputs: +# $1 = /full/path/to/directory +# +# outputs: +# true/false + +INPUT_DIR=$1 + +if [[ "${INPUT_DIR}" == "" ]]; then + echo "false" + exit +fi + +if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then + echo "false" + exit +fi + +statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*) +while read -r statInfoLine; do + f=$(echo ${statInfoLine} | cut -d' ' -f1) + p=$(echo ${statInfoLine} | cut -d' ' -f2) + + if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then + if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then + echo "false" + exit + fi + else + if [[ "$p" != "root:root" ]]; then + echo "false" + exit + fi + fi +done <<< "${statInfoLines}" + + +echo "true" +exit + +``` + +**Audit Execution:** + +```bash +./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl +``` + +**Expected Result**: + +```console +'true' is equal to 'true' +``` + +**Returned Value**: + +```console +true +``` + +### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual) + + +**Result:** warn + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' -exec chmod -R 600 {} + + +**Audit:** + +```bash +find /node/etc/kubernetes/ssl/ -name '*.pem' ! 
-name '*key.pem' | xargs stat -c permissions=%a +``` + +**Expected Result**: + +```console +permissions has permissions 644, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=644 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 +``` + +### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +find /node/etc/kubernetes/ssl/ -name '*key.pem' -exec chmod -R 600 {} + + +**Audit:** + +```bash +find /node/etc/kubernetes/ssl/ -name '*key.pem' | xargs stat -c permissions=%a +``` + +**Expected Result**: + +```console +permissions has permissions 600, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 +``` + +## 1.2 API Server +### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the below parameter. +--anonymous-auth=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--anonymous-auth' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the documentation and configure alternate mechanisms for authentication. Then, +edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and remove the --token-auth-file= parameter. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--token-auth-file' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and remove the `DenyServiceExternalIPs` +from enabled admission plugins. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the +apiserver and kubelets. Then, edit API server pod specification file +/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the +kubelet client certificate and key parameters as below. +--kubelet-client-certificate= +--kubelet-client-key= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +```
+
+### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between
+the apiserver and kubelets. Then, edit the API server pod specification file
+/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
+--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
+--kubelet-certificate-authority=
+When generating serving certificates, functionality could break in conjunction with hostname overrides, which are required for certain cloud providers.
+
+### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
+One such example could be as below.
+--authorization-mode=RBAC + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' does not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to a value that includes Node. +--authorization-mode=Node,RBAC + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' has 'Node' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to a value that includes RBAC, +for example `--authorization-mode=Node,RBAC`. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' has 'RBAC' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set the desired limits in a configuration file. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +and set the below parameters. +--enable-admission-plugins=...,EventRateLimit,... +--admission-control-config-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'EventRateLimit' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a +value that does not include AlwaysAdmit. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual) + + +**Result:** warn + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --enable-admission-plugins parameter to include +AlwaysPullImages. +--enable-admission-plugins=...,AlwaysPullImages,... + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'AlwaysPullImages' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual) + + +**Result:** warn + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --enable-admission-plugins parameter to include +SecurityContextDeny, unless PodSecurityPolicy is already in place. +--enable-admission-plugins=...,SecurityContextDeny,... + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the documentation and create ServiceAccount objects as per your environment. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and ensure that the --disable-admission-plugins parameter is set to a +value that does not include ServiceAccount. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --disable-admission-plugins parameter to +ensure it does not include NamespaceLifecycle. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --enable-admission-plugins parameter to a +value that includes NodeRestriction. +--enable-admission-plugins=...,NodeRestriction,... + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'NodeRestriction' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.16 Ensure that the --secure-port argument is not set to 0 - Note: This recommendation is obsolete and will be deleted per the consensus process (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and either remove the --secure-port parameter or
+set it to a different (non-zero) desired port.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--secure-port' is greater than 0 OR '--secure-port' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.17 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.18 Ensure that the --audit-log-path argument is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-path parameter to a suitable path and
+file where you would like audit logs to be written, for example,
+--audit-log-path=/var/log/apiserver/audit.log
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-path' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxage parameter to 30
+or as an appropriate number of days, for example,
+--audit-log-maxage=30
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxage' is greater or equal to 30
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
+value. For example,
+--audit-log-maxbackup=10
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxbackup' is greater or equal to 10
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
+For example, to set it as 100 MB, --audit-log-maxsize=100
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxsize' is greater or equal to 100
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+and set the below parameter as appropriate and if needed.
+For example, --request-timeout=300s
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--service-account-lookup=true
+Alternatively, you can delete the --service-account-lookup parameter from this file so
+that the default takes effect.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --service-account-key-file parameter
+to the public key file for service accounts. For example,
+--service-account-key-file=<filename>
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--service-account-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the etcd certificate and key file parameters.
+--etcd-certfile=<path/to/client-certificate-file>
+--etcd-keyfile=<path/to/client-key-file>
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--etcd-certfile' is present AND '--etcd-keyfile' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection on the apiserver. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the TLS certificate and private key file parameters. +--tls-cert-file= +--tls-private-key-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--tls-cert-file' is present AND '--tls-private-key-file' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection on the apiserver. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the client certificate authority file. +--client-ca-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--client-ca-file' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the etcd certificate authority file parameter. +--etcd-cafile= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--etcd-cafile' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and configure an EncryptionConfig file. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --encryption-provider-config parameter to the path of that file. +For example, --encryption-provider-config= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--encryption-provider-config' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.30 Ensure that encryption providers are appropriately configured (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and configure an EncryptionConfig file. +In this file, choose aescbc, kms or secretbox as the encryption provider. + +**Audit:** + +```bash +ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%'); if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi +``` + +**Expected Result**: + +```console +'provider' is present +``` + +### 1.2.31 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the below parameter. 
+--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256, +TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, +TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, +TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, +TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, +TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, +TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA, +TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384 + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +## 1.3 Controller Manager +### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold, +for example, --terminated-pod-gc-threshold=10 + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--terminated-pod-gc-threshold' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.2 Ensure that the --profiling argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the below parameter. +--profiling=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--profiling' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node to set the below parameter. +--use-service-account-credentials=true + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--use-service-account-credentials' is not equal to 'false' +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --service-account-private-key-file parameter +to the private key file for service accounts. +--service-account-private-key-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--service-account-private-key-file' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --root-ca-file parameter to the certificate bundle file. +--root-ca-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--root-ca-file' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. +--feature-gates=RotateKubeletServerCertificate=true +Clusters provisioned by RKE handle certificate rotation directly through RKE. 
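On a cluster provisioned by RKE, the control plane components run as containers managed by RKE rather than from static pod manifests, so flags such as the feature gate and thresholds in the remediations above are normally set in `cluster.yml` and applied with `rke up` instead of by editing /etc/kubernetes/manifests files. A minimal sketch follows; the `services`/`extra_args` keys come from RKE's cluster.yml schema, and the specific flag values shown are illustrative examples, not required settings:

```yaml
# Sketch of an RKE cluster.yml fragment that passes additional flags
# to the controller manager; adjust the values to your environment.
services:
  kube-controller:
    extra_args:
      feature-gates: RotateKubeletServerCertificate=true
      terminated-pod-gc-threshold: "1000"
```

After editing `cluster.yml`, re-run `rke up` so the controller manager container is recreated with the new arguments; the audits above (`ps -ef | grep kube-controller-manager`) can then confirm the flags took effect.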
+ +### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and ensure the correct value for the --bind-address parameter. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--bind-address' is present OR '--bind-address' is not present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +## 1.4 Scheduler +### 1.4.1 Ensure that the --profiling argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml +on the control 
plane node and set the below parameter. +--profiling=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-scheduler | grep -v grep +``` + +**Expected Result**: + +```console +'--profiling' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4339 4318 0 Sep11 ? 00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true +``` + +### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml +on the control plane node and ensure the correct value for the --bind-address parameter. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-scheduler | grep -v grep +``` + +**Expected Result**: + +```console +'--bind-address' is present OR '--bind-address' is not present +``` + +**Returned Value**: + +```console +root 4339 4318 0 Sep11 ? 
00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true +``` + +## 2 Etcd Node Configuration +### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the etcd service documentation and configure TLS encryption. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml +on the master node and set the below parameters. +--cert-file= +--key-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--cert-file' is present AND '--key-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and set the below parameter. +--client-cert-auth="true" + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.3 Ensure that the --auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and either remove the --auto-tls parameter or set it to false. 
+ + --auto-tls=false + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present +``` + +**Returned Value**: + +```console +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ +``` + +### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the etcd service documentation and configure peer TLS encryption as appropriate +for your etcd cluster. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the +master node and set the below parameters. +--peer-cert-file= +--peer-key-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--peer-cert-file' is present AND '--peer-key-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and set the below parameter. +--peer-client-cert-auth=true + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--peer-client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and either remove the --peer-auto-tls parameter or set it to false. 
+--peer-auto-tls=false + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present +``` + +**Returned Value**: + +```console +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ +``` + +### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated) + + +**Result:** pass + +**Remediation:** +[Manual test] +Follow the etcd documentation and create a dedicated certificate authority setup for the +etcd service. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the +master node and set the below parameter. +--trusted-ca-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--trusted-ca-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +## 3.1 Authentication and Authorization +### 3.1.1 Client certificate authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be +implemented in place of client certificates. + +### 3.1.2 Service account token authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented +in place of service account tokens. + +### 3.1.3 Bootstrap token authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented +in place of bootstrap tokens. + +## 3.2 Logging +### 3.2.1 Ensure that a minimal audit policy is created (Automated) + + +**Result:** pass + +**Remediation:** +Create an audit policy file for your cluster. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--audit-policy-file' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 3.2.2 Ensure that the audit policy covers key security concerns (Manual) + + +**Result:** warn + +**Remediation:** +Review the audit policy provided for the cluster and ensure that it covers +at least the following areas, +- Access to Secrets managed by the cluster. Care should be taken to only + log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in + order to avoid risk of logging sensitive data. +- Modification of Pod and Deployment objects. +- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`. +For most requests, minimally logging at the Metadata level is recommended +(the most basic level of logging). + +## 4.1 Worker Node Configuration Files +### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service. +All configuration is passed in as arguments at container run time. 
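The 4.1.x file checks that do apply on RKE nodes all follow the same `stat`-based audit pattern shown in their Audit steps. As a rough illustration of how that pattern behaves — run here against a throwaway temp file, since the real paths such as `/node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml` exist only on a node:

```shell
# Illustration only: the real audits target node-local paths like
# /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; a temp file stands in here.
f=$(mktemp)
chmod 600 "$f"                 # remediation step: restrict to owner read/write
stat -c permissions=%a "$f"    # audit step: prints permissions=600
stat -c %U:%G "$f"             # ownership audit: prints owner:group (root:root expected on a node)
rm -f "$f"
```

A check passes when the printed permissions are 600 or more restrictive, or the owner is `root:root`; on a workstation the ownership line prints your own user and group instead.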
+ +### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, +chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service. +All configuration is passed in as arguments at container run time. + +### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, +chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml + +**Audit:** + +```bash +/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' +``` + +**Expected Result**: + +```console +permissions has permissions 600, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 +``` + +### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. 
+For example, chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml + +**Audit:** + +```bash +/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' +``` + +**Expected Result**: + +```console +'root:root' is present +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, +chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml + +**Audit:** + +```bash +/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' +``` + +**Expected Result**: + +```console +permissions has permissions 600, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 +``` + +### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. 
+For example, +chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml + +**Audit:** + +```bash +/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' +``` + +**Expected Result**: + +```console +'root:root' is present +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated) + + +**Result:** fail + +**Remediation:** +Run the following command to modify the file permissions of the +--client-ca-file chmod 600 + +**Audit:** + +```bash +stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem +``` + +**Expected Result**: + +```console +permissions has permissions 644, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=644 +``` + +### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the following command to modify the ownership of the --client-ca-file. +chown root:root + +**Audit:** + +```bash +stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem +``` + +**Expected Result**: + +```console +'root:root' is equal to 'root:root' +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the following command (using the config file location identified in the Audit step) +chmod 600 /var/lib/kubelet/config.yaml +Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet. +All configuration is passed in as arguments at container run time. 
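The "600 or more restrictive" wording used throughout the 4.1.x checks means no permission bit outside the owner read/write set may be present; a plain numeric comparison against 600 is not equivalent. Below is a minimal, hedged sketch of the mask-based test — the temporary file is a stand-in for a real node file such as `/node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml`:

```bash
#!/bin/sh
# Pass when no bit outside rw------- (0600) is set, i.e. the file is
# "600 or more restrictive". 0177 is the complement of 0600 within 0777.
check_600() {
  p=$(stat -c %a "$1")
  if [ $(( 0$p & 0177 )) -eq 0 ]; then
    echo "pass: $1 permissions=$p"
  else
    echo "fail: $1 permissions=$p, expected 600 or more restrictive"
  fi
}

# Demo on a temporary file so the sketch runs anywhere.
f=$(mktemp)
chmod 644 "$f"; check_600 "$f"   # fail: group/world read bits are set
chmod 400 "$f"; check_600 "$f"   # pass: stricter than 600
rm -f "$f"
```

On an RKE node the same function can be pointed at each kubeconfig and key file named in the audits above.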
+ +### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Run the following command (using the config file location identified in the Audit step) +chown root:root /var/lib/kubelet/config.yaml +Not Applicable - Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet. +All configuration is passed in as arguments at container run time. + +## 4.2 Kubelet +### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to +`false`. +If using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +`--anonymous-auth=false` +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--anonymous-auth' is equal to 'false' +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ?
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If +using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--authorization-mode=Webhook +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--authorization-mode' does not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file 
to set `authentication.x509.clientCAFile` to +the location of the client CA file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--client-ca-file= +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--client-ca-file' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local 
--tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `readOnlyPort` to 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--read-only-port=0 +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--read-only-port' is equal to '0' OR '--read-only-port' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a +value other than 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--streaming-connection-idle-timeout=5m +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated) + + +**Result:** pass 
+ +**Remediation:** +If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +remove the --make-iptables-util-chains argument from the +KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 
--kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.7 Ensure that the --hostname-override argument is not set (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +on each worker node and remove the --hostname-override argument from the +KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service +Not Applicable - Clusters provisioned by RKE set --hostname-override to avoid hostname configuration errors. + +### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--event-qps' is greater or equal to 0 OR '--event-qps' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ?
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `tlsCertFile` to the location +of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile` +to the location of the corresponding private key file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameters in KUBELET_CERTIFICATE_ARGS variable. 
+--tls-cert-file= +--tls-private-key-file= +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--tls-cert-file' is present AND '--tls-private-key-file' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.10 Ensure that the --rotate-certificates argument is 
not set to false (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `rotateCertificates` to `true`, or +remove it altogether to use the default value. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +remove the --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS +variable. +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--rotate-certificates' is present OR '--rotate-certificates' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true
--cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable. +--feature-gates=RotateKubeletServerCertificate=true +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service +Not Applicable - RKE handles certificate rotation directly for the clusters it provisions. + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `TLSCipherSuites` to +TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +or to a subset of these values. +If using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the --tls-cipher-suites parameter as follows, or to a subset of these values.
+--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.13 Ensure that a limit is set on pod PIDs (Manual) + + +**Result:** warn + +**Remediation:** +Decide on an appropriate level for this parameter and set it, +either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting. + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--pod-max-pids' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +## 5.1 RBAC and Service Accounts +### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual) + + +**Result:** warn + +**Remediation:** +Identify all clusterrolebindings to the cluster-admin role. Check if they are used and +if they need this role or if they could use a role with fewer privileges. 
+ +Where possible, first bind users to a lower privileged role and then remove the +clusterrolebinding to the cluster-admin role: +kubectl delete clusterrolebinding [name] + +### 5.1.2 Minimize access to secrets (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove get, list and watch access to Secret objects in the cluster. + +### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, replace any use of wildcards in clusterroles and roles with specific +objects or actions. + +### 5.1.4 Minimize access to create pods (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove create access to pod objects in the cluster. + +### 5.1.5 Ensure that default service accounts are not actively used. (Manual) + + +**Result:** pass + +**Remediation:** +Create explicit service accounts wherever a Kubernetes workload requires specific access +to the Kubernetes API server. +Modify the configuration of each default service account to include this value +automountServiceAccountToken: false + +**Audit Script:** `check_for_default_sa.sh` + +```bash +#!/bin/bash + +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + +count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l) +if [[ ${count_sa} -gt 0 ]]; then + echo "false" + exit +fi + +for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name") +do + for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"') + do + read kind name <<<$(IFS=","; echo $result) + 
resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l) + if [[ ${resource_count} -gt 0 ]]; then + echo "false" + exit + fi + done +done + + +echo "true" + +``` + +**Audit Execution:** + +```bash +./check_for_default_sa.sh +``` + +**Expected Result**: + +```console +'true' is equal to 'true' +``` + +**Returned Value**: + +```console +true +``` + +### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) + + +**Result:** warn + +**Remediation:** +Modify the definition of pods and service accounts which do not need to mount service +account tokens to disable it. + +### 5.1.7 Avoid use of system:masters group (Manual) + + +**Result:** warn + +**Remediation:** +Remove the system:masters group from all users in the cluster. + +### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove the impersonate, bind and escalate rights from subjects. + +### 5.1.9 Minimize access to create persistent volumes (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove create access to PersistentVolume objects in the cluster. + +### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the proxy sub-resource of node objects. + +### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the approval sub-resource of certificatesigningrequest objects. 
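Checks 5.1.1 through 5.1.4 above all reduce to auditing which subjects are bound to which roles. As a minimal sketch of that audit (the binding names below are illustrative sample data, not output from a real cluster), the `NAME`/`ROLE` columns that `kubectl get clusterrolebindings -o custom-columns=...` would produce can be filtered with `awk`:

```bash
# Sample data in the shape produced by:
#   kubectl get clusterrolebindings \
#     -o custom-columns='NAME:.metadata.name,ROLE:.roleRef.name'
# captured inline so the filter can be shown without a live cluster.
bindings='NAME                   ROLE
cluster-admin          cluster-admin
helm-operator          edit
legacy-admin-binding   cluster-admin'

# Keep the header plus any binding whose role is cluster-admin (check 5.1.1).
echo "$bindings" | awk 'NR == 1 || $2 == "cluster-admin"'
# Prints the header row plus the "cluster-admin" and "legacy-admin-binding" rows.
```

Each surviving row (other than the built-in `cluster-admin` binding itself) is a candidate for rebinding to a lower-privileged role.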
+ +### 5.1.12 Minimize access to webhook configuration objects (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects + +### 5.1.13 Minimize access to the service account token creation (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the token sub-resource of serviceaccount objects. + +## 5.2 Pod Security Standards +### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual) + + +**Result:** warn + +**Remediation:** +Ensure that either Pod Security Admission or an external policy control system is in place +for every namespace which contains user workloads. + +### 5.2.2 Minimize the admission of privileged containers (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of privileged containers. + +### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostPID` containers. + +### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostIPC` containers. + +### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostNetwork` containers. 
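For checks 5.2.2 through 5.2.5, one built-in policy control mechanism (check 5.2.1) is Pod Security Admission, enabled per namespace with labels. The sketch below only prints a labeled Namespace manifest; on a live cluster you would pipe it to `kubectl apply -f -`. The namespace name is illustrative:

```bash
# Build (not apply) a Namespace manifest that opts in to Pod Security
# Admission enforcement.
psa_ns=$(cat <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: user-workloads                       # illustrative name
  labels:
    # "baseline" rejects privileged, hostPID, hostIPC and hostNetwork pods,
    # covering checks 5.2.2 through 5.2.5.
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: latest
EOF
)
echo "$psa_ns"
```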
+
+### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
+
+### 5.2.7 Minimize the admission of root containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
+or `MustRunAs` with the range of UIDs not including 0, is set.
+
+### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with the `NET_RAW` capability.
+
+### 5.2.9 Minimize the admission of containers with added capabilities (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that `allowedCapabilities` is not present in policies for the cluster unless
+it is set to an empty array.
+
+### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the use of capabilities in applications running on your cluster. Where a namespace
+contains applications which do not require any Linux capabilities to operate, consider adding
+a PSP which forbids the admission of containers which do not drop all capabilities.
+
+### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
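Checks 5.2.6 through 5.2.10 are commonly satisfied at the workload level with a restrictive `securityContext`. A minimal sketch of such a pod spec, printed rather than applied (the pod name and image are illustrative):

```bash
# Build (not apply) a pod spec with a hardened securityContext.
pod_spec=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example                    # illustrative name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # illustrative image
      securityContext:
        runAsNonRoot: true                  # check 5.2.7
        allowPrivilegeEscalation: false     # check 5.2.6
        capabilities:
          drop: ["ALL"]                     # checks 5.2.8 - 5.2.10
EOF
)
echo "$pod_spec"
```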
+
+### 5.2.12 Minimize the admission of HostPath volumes (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `hostPath` volumes.
+
+### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers which use `hostPort` sections.
+
+## 5.3 Network Policies and CNI
+### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If the CNI plugin in use does not support network policies, consideration should be given to
+making use of a different plugin, or finding an alternate mechanism for restricting traffic
+in the Kubernetes cluster.
+
+### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the documentation and create NetworkPolicy objects as you need them.
+
+## 5.4 Secrets Management
+### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If possible, rewrite application code to read Secrets from mounted secret files, rather than
+from environment variables.
+
+### 5.4.2 Consider external secret storage (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Refer to the Secrets management options offered by your cloud provider or a third-party
+secrets management solution.
+
+## 5.5 Extensible Admission Control
+### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and set up image provenance.
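For check 5.3.2, a common starting point is a per-namespace default-deny ingress policy: the empty `podSelector` matches every pod in the namespace. The sketch below prints the manifest rather than applying it; the names are illustrative:

```bash
# Build (not apply) a default-deny ingress NetworkPolicy.
policy=$(cat <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress   # illustrative name
  namespace: user-workloads    # illustrative namespace
spec:
  podSelector: {}              # empty selector = every pod in the namespace
  policyTypes:
    - Ingress
EOF
)
echo "$policy"
```

Note that this only has an effect when the CNI in use supports NetworkPolicies (check 5.3.1).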
+ +## 5.7 General Policies +### 5.7.1 Create administrative boundaries between resources using namespaces (Manual) + + +**Result:** warn + +**Remediation:** +Follow the documentation and create namespaces for objects in your deployment as you need +them. + +### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual) + + +**Result:** warn + +**Remediation:** +Use `securityContext` to enable the docker/default seccomp profile in your pod definitions. +An example is as below: + securityContext: + seccompProfile: + type: RuntimeDefault + +### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a +suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker +Containers. + +### 5.7.4 The default namespace should not be used (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Ensure that namespaces are created to allow for appropriate segregation of Kubernetes +resources and that all new resources are created in a specific namespace. 
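Returning to check 5.4.1 above, reading Secrets from mounted files rather than environment variables looks roughly like the following pod sketch, printed rather than applied (all names and the image are illustrative):

```bash
# Build (not apply) a pod spec that mounts a Secret as read-only files.
secret_pod=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: reads-secret-as-file                # illustrative name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # illustrative image
      volumeMounts:
        - name: db-credentials
          mountPath: /etc/app/credentials   # app reads key files here
          readOnly: true
  volumes:
    - name: db-credentials
      secret:
        secretName: db-credentials          # illustrative Secret name
EOF
)
echo "$secret_pod"
```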
From 0b2c7421c40d9dd1b92c1b618fb2d93de365700d Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Thu, 24 Jul 2025 14:55:38 -0700 Subject: [PATCH 40/57] Revert changes made to amazon-ec2.md --- .../node-template-configuration/amazon-ec2.md | 2 +- .../node-template-configuration/amazon-ec2.md | 2 +- .../node-template-configuration/amazon-ec2.md | 2 +- .../node-template-configuration/amazon-ec2.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md b/docs/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md index 09a4a1a3361..e4b77c5174c 100644 --- a/docs/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md +++ b/docs/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md @@ -38,7 +38,7 @@ Choose the default security group or configure a security group. Please refer to [Amazon EC2 security group when using Node Driver](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-aws-ec2-security-group) to see what rules are created in the `rancher-nodes` Security Group. -If you provide your own security group for an EC2 instance, please note that Rancher will not modify it. As such, you will be responsible for ensuring that your security group is set to allow the [necessary ports for Rancher to provision the instance](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-nodes). For more information on controlling inbound and outbound traffic to EC2 instances with security groups, refer [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups). 
+If you provide your own security group for an EC2 instance, please note that Rancher will not modify it. As such, you will be responsible for ensuring that your security group is set to allow the [necessary ports for Rancher to provision the instance](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#ports-for-rancher-server-nodes-on-rke). For more information on controlling inbound and outbound traffic to EC2 instances with security groups, refer [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups). ### Instance Options diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md index ae6d9b5dbae..5f222f19325 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md @@ -34,7 +34,7 @@ title: EC2 节点模板配置 请参考[使用主机驱动时的 Amazon EC2 安全组](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-aws-ec2-安全组),了解 `rancher-nodes` 安全组中创建的规则。 -如果你自行为 EC2 实例提供安全组,Rancher 不会对其进行修改。因此,你需要让你的安全组允许 [Rancher 配置实例所需的端口](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-节点)。有关使用安全组控制 EC2 实例的入站和出站流量的更多信息,请参阅[这里](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups)。 +如果你自行为 EC2 实例提供安全组,Rancher 不会对其进行修改。因此,你需要让你的安全组允许 [Rancher 
配置实例所需的端口](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rke-上-rancher-server-节点的端口)。有关使用安全组控制 EC2 实例的入站和出站流量的更多信息,请参阅[这里](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups)。 ## 实例选项 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md index ae6d9b5dbae..5f222f19325 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md @@ -34,7 +34,7 @@ title: EC2 节点模板配置 请参考[使用主机驱动时的 Amazon EC2 安全组](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-aws-ec2-安全组),了解 `rancher-nodes` 安全组中创建的规则。 -如果你自行为 EC2 实例提供安全组,Rancher 不会对其进行修改。因此,你需要让你的安全组允许 [Rancher 配置实例所需的端口](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-节点)。有关使用安全组控制 EC2 实例的入站和出站流量的更多信息,请参阅[这里](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups)。 +如果你自行为 EC2 实例提供安全组,Rancher 不会对其进行修改。因此,你需要让你的安全组允许 [Rancher 配置实例所需的端口](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rke-上-rancher-server-节点的端口)。有关使用安全组控制 EC2 实例的入站和出站流量的更多信息,请参阅[这里](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups)。 ## 实例选项 diff --git 
a/versioned_docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md b/versioned_docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md index 09a4a1a3361..e4b77c5174c 100644 --- a/versioned_docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md +++ b/versioned_docs/version-2.12/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2.md @@ -38,7 +38,7 @@ Choose the default security group or configure a security group. Please refer to [Amazon EC2 security group when using Node Driver](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-aws-ec2-security-group) to see what rules are created in the `rancher-nodes` Security Group. -If you provide your own security group for an EC2 instance, please note that Rancher will not modify it. As such, you will be responsible for ensuring that your security group is set to allow the [necessary ports for Rancher to provision the instance](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-nodes). For more information on controlling inbound and outbound traffic to EC2 instances with security groups, refer [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups). +If you provide your own security group for an EC2 instance, please note that Rancher will not modify it. As such, you will be responsible for ensuring that your security group is set to allow the [necessary ports for Rancher to provision the instance](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#ports-for-rancher-server-nodes-on-rke). 
For more information on controlling inbound and outbound traffic to EC2 instances with security groups, refer [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups). ### Instance Options From adfe0d46cdb0823ecf18627c6ecf6aa877d768e3 Mon Sep 17 00:00:00 2001 From: Sunil Singh Date: Fri, 25 Jul 2025 11:05:59 -0700 Subject: [PATCH 41/57] Updating sidebars.js as extra lines were mistakenly added in PR #1867 Signed-off-by: Sunil Singh --- sidebars.js | 3 --- 1 file changed, 3 deletions(-) diff --git a/sidebars.js b/sidebars.js index 9aa046f49b1..952660f8784 100644 --- a/sidebars.js +++ b/sidebars.js @@ -396,9 +396,6 @@ const sidebars = { "how-to-guides/new-user-guides/manage-clusters/clean-cluster-nodes", - "how-to-guides/new-user-guides/manage-clusters/add-a-pod-security-policy", - - "how-to-guides/new-user-guides/manage-clusters/assign-pod-security-policies", ], }, { From 1b39a41881f97c0f4a004ec0ce5091708519e4fa Mon Sep 17 00:00:00 2001 From: Sunil Singh Date: Fri, 25 Jul 2025 12:00:17 -0700 Subject: [PATCH 42/57] Updating redirects from action error list Signed-off-by: Sunil Singh --- docusaurus.config.js | 28 ---------------------------- 1 file changed, 28 deletions(-) diff --git a/docusaurus.config.js b/docusaurus.config.js index 134fba0c8c3..832f280a6fd 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -1188,10 +1188,6 @@ module.exports = { to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/example-use-cases", from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/example-use-cases", }, - { - to: "/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/create-pod-security-policies", - from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/create-pod-security-policies", - }, { to: 
"/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry", from: "/how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry", @@ -1279,14 +1275,6 @@ module.exports = { to: "/how-to-guides/new-user-guides/manage-clusters/clean-cluster-nodes", from: "/how-to-guides/advanced-user-guides/manage-clusters/clean-cluster-nodes", }, - { - to: "/how-to-guides/new-user-guides/manage-clusters/add-a-pod-security-policy", - from: "/how-to-guides/advanced-user-guides/manage-clusters/add-a-pod-security-policy", - }, - { - to: "/how-to-guides/new-user-guides/manage-clusters/assign-pod-security-policies", - from: "/how-to-guides/advanced-user-guides/manage-clusters/assign-pod-security-policies", - }, { to: "/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster", from: "/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster", @@ -1447,10 +1435,6 @@ module.exports = { to: "/integrations-in-rancher/istio/disable-istio", from: "/explanations/integrations-in-rancher/istio/disable-istio", }, - { - to: "/integrations-in-rancher/istio/configuration-options/pod-security-policies", - from: "/explanations/integrations-in-rancher/istio/configuration-options/pod-security-policies", - }, { to: "/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations", from: "/explanations/integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations", @@ -1515,26 +1499,14 @@ module.exports = { to: "/integrations-in-rancher/neuvector", from: "/explanations/integrations-in-rancher/neuvector", }, // Redirects for restructure from PR #234 (end) - { - to: 
"/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.24-k8s-v1.24", - from: "/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.24", - }, { to: "/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27", from: "/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.25", }, - { - to: "/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.24-k8s-v1.24", - from: "/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.24", - }, { to: "/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27", from: "/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.25", }, - { - to: "/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.24-k8s-v1.24", - from: "/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.24", - }, { to: "/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27", from: "/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.25", From dba2f7a80c287f28f6876133f54d59e2269491ff Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Fri, 25 Jul 2025 14:05:50 -0700 Subject: [PATCH 43/57] [2.12.0] versions update --- src/pages/versions.md | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+) diff --git a/src/pages/versions.md b/src/pages/versions.md index 
9e4039a8ade..eccae53b13f 100644 --- a/src/pages/versions.md +++ b/src/pages/versions.md @@ -5,6 +5,27 @@ title: Rancher Documentation Versions ### Current Versions +Here you can find links to supporting documentation for the current released version of Rancher v2.12, and its availability for [Rancher Prime](/v2.12/getting-started/quick-start-guides/deploy-rancher-manager/prime) and the Community version of Rancher: + + + + + + + + + + + + + + + + + + +
VersionDocumentationRelease NotesSupport MatrixPrimeCommunity
v2.12.0DocumentationRelease Notes
N/A
N/A
+ Here you can find links to supporting documentation for the current released version of Rancher v2.11, and its availability for [Rancher Prime](/v2.11/getting-started/quick-start-guides/deploy-rancher-manager/prime) and the Community version of Rancher: From d03420a2812b30faba84250dabd969873d953cd2 Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Fri, 25 Jul 2025 14:08:33 -0700 Subject: [PATCH 44/57] [2.12.0] webhook update --- docs/reference-guides/rancher-webhook.md | 5 +---- .../version-2.12/reference-guides/rancher-webhook.md | 4 +--- 2 files changed, 2 insertions(+), 7 deletions(-) diff --git a/docs/reference-guides/rancher-webhook.md b/docs/reference-guides/rancher-webhook.md index 760e51bd32d..220be0f17eb 100644 --- a/docs/reference-guides/rancher-webhook.md +++ b/docs/reference-guides/rancher-webhook.md @@ -20,10 +20,7 @@ Each Rancher version is designed to be compatible with a single version of the w | Rancher Version | Webhook Version | Availability in Prime | Availability in Community | |-----------------|-----------------|-----------------------|---------------------------| -| v2.11.3 | v0.7.3 | ✓ | ✓ | -| v2.11.2 | v0.7.2 | ✓ | ✓ | -| v2.11.1 | v0.7.1 | ✓ | ✓ | -| v2.11.0 | v0.7.0 | ✗ | ✓ | +| v2.12.0 | v0.8.0 | ✗ | ✓ | ## Why Do We Need It? 
diff --git a/versioned_docs/version-2.12/reference-guides/rancher-webhook.md b/versioned_docs/version-2.12/reference-guides/rancher-webhook.md index 27c9b2b2e12..220be0f17eb 100644 --- a/versioned_docs/version-2.12/reference-guides/rancher-webhook.md +++ b/versioned_docs/version-2.12/reference-guides/rancher-webhook.md @@ -20,9 +20,7 @@ Each Rancher version is designed to be compatible with a single version of the w | Rancher Version | Webhook Version | Availability in Prime | Availability in Community | |-----------------|-----------------|-----------------------|---------------------------| -| v2.11.2 | v0.7.2 | ✓ | ✓ | -| v2.11.1 | v0.7.1 | ✓ | ✓ | -| v2.11.0 | v0.7.0 | ✗ | ✓ | +| v2.12.0 | v0.8.0 | ✗ | ✓ | ## Why Do We Need It? From e718a060ad962570d1f6a9e0406c67438b2ea510 Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Fri, 25 Jul 2025 14:10:49 -0700 Subject: [PATCH 45/57] [2.12.0] CSP adapter update --- .../aws-cloud-marketplace/install-adapter.md | 5 +---- .../aws-cloud-marketplace/install-adapter.md | 4 +--- 2 files changed, 2 insertions(+), 7 deletions(-) diff --git a/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md b/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md index 9d6fd0b1876..997aaa48171 100644 --- a/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md +++ b/docs/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md @@ -19,10 +19,7 @@ In order to deploy and run the adapter successfully, you need to ensure its vers | Rancher Version | Adapter Version | |-----------------|------------------| -| v2.11.3 | v106.0.0+up6.0.0 | -| v2.11.2 | v106.0.0+up6.0.0 | -| v2.11.1 | v106.0.0+up6.0.0 | -| v2.11.0 | v106.0.0+up6.0.0 | +| v2.12.0 | 107.0.0+up7.0.0 | ### 1. 
Gain Access to the Local Cluster diff --git a/versioned_docs/version-2.12/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md b/versioned_docs/version-2.12/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md index 1f1d6335a21..997aaa48171 100644 --- a/versioned_docs/version-2.12/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md +++ b/versioned_docs/version-2.12/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md @@ -19,9 +19,7 @@ In order to deploy and run the adapter successfully, you need to ensure its vers | Rancher Version | Adapter Version | |-----------------|------------------| -| v2.11.2 | v106.0.0+up6.0.0 | -| v2.11.1 | v106.0.0+up6.0.0 | -| v2.11.0 | v106.0.0+up6.0.0 | +| v2.12.0 | 107.0.0+up7.0.0 | ### 1. Gain Access to the Local Cluster From 366c4db8cd92abc164018d54af3b23186c7de9c5 Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Fri, 25 Jul 2025 14:13:12 -0700 Subject: [PATCH 46/57] [2.12.0] depreciated features update --- docs/faq/deprecated-features.md | 5 +---- versioned_docs/version-2.12/faq/deprecated-features.md | 4 +--- 2 files changed, 2 insertions(+), 7 deletions(-) diff --git a/docs/faq/deprecated-features.md b/docs/faq/deprecated-features.md index 959e3edf1d2..50ae2b5e5e1 100644 --- a/docs/faq/deprecated-features.md +++ b/docs/faq/deprecated-features.md @@ -16,10 +16,7 @@ Rancher will publish deprecated features as part of the [release notes](https:// | Patch Version | Release Date | |---------------|---------------| -| [2.11.3](https://github.com/rancher/rancher/releases/tag/v2.11.3) | June 25, 2025 | -| [2.11.2](https://github.com/rancher/rancher/releases/tag/v2.11.2) | May 22, 2025 | -| [2.11.1](https://github.com/rancher/rancher/releases/tag/v2.11.1) | Apr 24, 2025 | -| [2.11.0](https://github.com/rancher/rancher/releases/tag/v2.11.0) | Mar 31, 2025 | +| 
[2.12.0](https://github.com/rancher/rancher/releases/tag/v2.12.0) | July 31, 2025 | ## What can I expect when a feature is marked for deprecation? diff --git a/versioned_docs/version-2.12/faq/deprecated-features.md b/versioned_docs/version-2.12/faq/deprecated-features.md index 221a39ae343..50ae2b5e5e1 100644 --- a/versioned_docs/version-2.12/faq/deprecated-features.md +++ b/versioned_docs/version-2.12/faq/deprecated-features.md @@ -16,9 +16,7 @@ Rancher will publish deprecated features as part of the [release notes](https:// | Patch Version | Release Date | |---------------|---------------| -| [2.11.2](https://github.com/rancher/rancher/releases/tag/v2.11.2) | May 22, 2025 | -| [2.11.1](https://github.com/rancher/rancher/releases/tag/v2.11.1) | Apr 24, 2025 | -| [2.11.0](https://github.com/rancher/rancher/releases/tag/v2.11.0) | Mar 31, 2025 | +| [2.12.0](https://github.com/rancher/rancher/releases/tag/v2.12.0) | July 31, 2025 | ## What can I expect when a feature is marked for deprecation? 
From 93212945d6b07df65d850dd1e6ca544b74000d22 Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Fri, 25 Jul 2025 14:15:35 -0700 Subject: [PATCH 47/57] zh [2.12.0] webhook update --- .../current/reference-guides/rancher-webhook.md | 5 +---- .../version-2.12/reference-guides/rancher-webhook.md | 4 +--- 2 files changed, 2 insertions(+), 7 deletions(-) diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-webhook.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-webhook.md index a4cadfd26a9..2feb30e26ea 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-webhook.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-webhook.md @@ -20,10 +20,7 @@ Rancher 将 Rancher-Webhook 作为单独的 deployment 和服务部署在 local | Rancher Version | Webhook Version | Availability in Prime | Availability in Community | |-----------------|-----------------|-----------------------|---------------------------| -| v2.11.3 | v0.7.3 | ✓ | ✓ | -| v2.11.2 | v0.7.2 | ✓ | ✓ | -| v2.11.1 | v0.7.1 | ✓ | ✓ | -| v2.11.0 | v0.7.0 | ✗ | ✓ | +| v2.12.0 | v0.8.0 | ✗ | ✓ | ## 为什么我们需要它? diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-webhook.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-webhook.md index 00d96b58a38..2feb30e26ea 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-webhook.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-webhook.md @@ -20,9 +20,7 @@ Rancher 将 Rancher-Webhook 作为单独的 deployment 和服务部署在 local | Rancher Version | Webhook Version | Availability in Prime | Availability in Community | |-----------------|-----------------|-----------------------|---------------------------| -| v2.11.2 | v0.7.2 | ✓ | ✓ | -| v2.11.1 | v0.7.1 | ✓ | ✓ | -| v2.11.0 | v0.7.0 | ✗ | ✓ | +| v2.12.0 | v0.8.0 | ✗ | ✓ | ## 为什么我们需要它? 
From 84a1457bc7a5f901e55eecadcbfe504f21b69fe7 Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Fri, 25 Jul 2025 14:18:28 -0700 Subject: [PATCH 48/57] zh [2.12.0] CSP adapter update --- .../aws-cloud-marketplace/install-adapter.md | 5 +---- .../aws-cloud-marketplace/install-adapter.md | 4 +--- 2 files changed, 2 insertions(+), 7 deletions(-) diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md b/i18n/zh/docusaurus-plugin-content-docs/current/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md index c7f256a912f..d9258538bfc 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md @@ -15,10 +15,7 @@ title: 安装 Adapter | Rancher 版本 | Adapter 版本 | |-----------------|:----------------:| -| v2.11.3 | v106.0.0+up6.0.0 | -| v2.11.2 | v106.0.0+up6.0.0 | -| v2.11.1 | v106.0.0+up6.0.0 | -| v2.11.0 | v106.0.0+up6.0.0 | +| v2.12.0 | 107.0.0+up7.0.0 | ## 1. 
获取对 Local 集群的访问权限 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md index e883536a6ad..d9258538bfc 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/integrations-in-rancher/cloud-marketplace/aws-cloud-marketplace/install-adapter.md @@ -15,9 +15,7 @@ title: 安装 Adapter | Rancher 版本 | Adapter 版本 | |-----------------|:----------------:| -| v2.11.2 | v106.0.0+up6.0.0 | -| v2.11.1 | v106.0.0+up6.0.0 | -| v2.11.0 | v106.0.0+up6.0.0 | +| v2.12.0 | 107.0.0+up7.0.0 | ## 1. 获取对 Local 集群的访问权限 From c102695ecb638221ef8d8a0ce0c1d849a9cbd29f Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Fri, 25 Jul 2025 14:19:57 -0700 Subject: [PATCH 49/57] zh [2.12.0] depreciated features update --- .../current/faq/deprecated-features.md | 5 +---- .../version-2.12/faq/deprecated-features.md | 4 +--- 2 files changed, 2 insertions(+), 7 deletions(-) diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/faq/deprecated-features.md b/i18n/zh/docusaurus-plugin-content-docs/current/faq/deprecated-features.md index 18fd39792ad..373fb856af3 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/faq/deprecated-features.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/faq/deprecated-features.md @@ -16,10 +16,7 @@ Rancher 将在 GitHub 上发布的 Rancher 的[发版说明](https://github.com/ | Patch 版本 | 发布时间 | | ----------------------------------------------------------------- | ------------------ | -| [2.11.3](https://github.com/rancher/rancher/releases/tag/v2.11.2) | 2025 年 6 月 25 日 | -| [2.11.2](https://github.com/rancher/rancher/releases/tag/v2.11.2) | 2025 年 5 月 22 日 | -| 
[2.11.1](https://github.com/rancher/rancher/releases/tag/v2.11.1) | 2025 年 4 月 24 日 | -| [2.11.0](https://github.com/rancher/rancher/releases/tag/v2.11.0) | 2025 年 3 月 31 日 | +| [2.12.0](https://github.com/rancher/rancher/releases/tag/v2.12.0) | 2025 年 7 月 31 日 | ## 当一个功能被标记为弃用我可以得到什么样的预期? diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/faq/deprecated-features.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/faq/deprecated-features.md index e604fde7528..373fb856af3 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/faq/deprecated-features.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/faq/deprecated-features.md @@ -16,9 +16,7 @@ Rancher 将在 GitHub 上发布的 Rancher 的[发版说明](https://github.com/ | Patch 版本 | 发布时间 | | ----------------------------------------------------------------- | ------------------ | -| [2.11.2](https://github.com/rancher/rancher/releases/tag/v2.11.2) | 2025 年 5 月 22 日 | -| [2.11.1](https://github.com/rancher/rancher/releases/tag/v2.11.1) | 2025 年 4 月 24 日 | -| [2.11.0](https://github.com/rancher/rancher/releases/tag/v2.11.0) | 2025 年 3 月 31 日 | +| [2.12.0](https://github.com/rancher/rancher/releases/tag/v2.12.0) | 2025 年 7 月 31 日 | ## 当一个功能被标记为弃用我可以得到什么样的预期? From 84eed79b156dd9392a8569a46c827230d44dc301 Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Fri, 25 Jul 2025 14:24:46 -0700 Subject: [PATCH 50/57] Update CNI popularity table --- shared-files/_cni-popularity.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/shared-files/_cni-popularity.md b/shared-files/_cni-popularity.md index 24d97463128..fbbcadbadc2 100644 --- a/shared-files/_cni-popularity.md +++ b/shared-files/_cni-popularity.md @@ -1,10 +1,10 @@ -The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity levels. This data was collected in June 2025. 
+The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity levels. This data was collected in July 2025. | Provider | Project | Stars | Forks | Contributors | | ---- | ---- | ---- | ---- | ---- | | Canal | https://github.com/projectcalico/canal | 720 | 99 | 20 | -| Flannel | https://github.com/flannel-io/flannel | 9.2k | 2.9k | 239 | -| Calico | https://github.com/projectcalico/calico | 6.5k | 1.4k | 378 | +| Flannel | https://github.com/flannel-io/flannel | 9.2k | 2.9k | 242 | +| Calico | https://github.com/projectcalico/calico | 6.7k | 1.5k | 380 | | Weave | https://github.com/weaveworks/weave | 6.6k | 681 | 84 | -| Cilium | https://github.com/cilium/cilium | 21.9k | 3.3k | 948 | +| Cilium | https://github.com/cilium/cilium | 21.1k | 3.3k | 959 | From e9311bb9286b166ca14d9f7c87027795dd87d366 Mon Sep 17 00:00:00 2001 From: Peter Matseykanets Date: Fri, 25 Jul 2025 15:04:29 -0400 Subject: [PATCH 51/57] Fix Kubeconfigs example workflows page --- docs/api/workflows/kubeconfigs.md | 3 +-- versioned_docs/version-2.12/api/workflows/kubeconfigs.md | 3 +-- 2 files changed, 2 insertions(+), 4 deletions(-) diff --git a/docs/api/workflows/kubeconfigs.md b/docs/api/workflows/kubeconfigs.md index e5e92a391a9..518080ed4c8 100644 --- a/docs/api/workflows/kubeconfigs.md +++ b/docs/api/workflows/kubeconfigs.md @@ -29,8 +29,7 @@ kubectl patch feature ext-kubeconfigs -p '{"spec":{"value":false}}' ## Creating a Kubeconfig -Admins can delete any Kubeconfig, while regular users can only delete their own. When a Kubeconfig is deleted, the kubeconfig tokens are also deleted. -E.g. using a service account `system:admin` will lead to the following error: +Only a **valid and active** Rancher user can create a Kubeconfig. E.g. 
trying to create a Kubeconfig using `system:admin` service account will lead to an error: ```bash kubectl create -o jsonpath='{.status.value}' -f -< Date: Fri, 25 Jul 2025 15:21:50 -0400 Subject: [PATCH 52/57] Address review feedback Co-authored-by: Lucas Saintarbor --- docs/api/workflows/kubeconfigs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/api/workflows/kubeconfigs.md b/docs/api/workflows/kubeconfigs.md index 518080ed4c8..ee76b4d8d8a 100644 --- a/docs/api/workflows/kubeconfigs.md +++ b/docs/api/workflows/kubeconfigs.md @@ -29,7 +29,7 @@ kubectl patch feature ext-kubeconfigs -p '{"spec":{"value":false}}' ## Creating a Kubeconfig -Only a **valid and active** Rancher user can create a Kubeconfig. E.g. trying to create a Kubeconfig using `system:admin` service account will lead to an error: +Only a **valid and active** Rancher user can create a Kubeconfig. For example, trying to create a Kubeconfig using a `system:admin` service account will lead to an error: ```bash kubectl create -o jsonpath='{.status.value}' -f -< Date: Fri, 25 Jul 2025 15:22:32 -0400 Subject: [PATCH 53/57] Address review feedback Co-authored-by: Lucas Saintarbor --- versioned_docs/version-2.12/api/workflows/kubeconfigs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/versioned_docs/version-2.12/api/workflows/kubeconfigs.md b/versioned_docs/version-2.12/api/workflows/kubeconfigs.md index 518080ed4c8..ee76b4d8d8a 100644 --- a/versioned_docs/version-2.12/api/workflows/kubeconfigs.md +++ b/versioned_docs/version-2.12/api/workflows/kubeconfigs.md @@ -29,7 +29,7 @@ kubectl patch feature ext-kubeconfigs -p '{"spec":{"value":false}}' ## Creating a Kubeconfig -Only a **valid and active** Rancher user can create a Kubeconfig. E.g. trying to create a Kubeconfig using `system:admin` service account will lead to an error: +Only a **valid and active** Rancher user can create a Kubeconfig. 
For example, trying to create a Kubeconfig using a `system:admin` service account will lead to an error: ```bash kubectl create -o jsonpath='{.status.value}' -f -< Date: Mon, 28 Jul 2025 09:51:34 -0700 Subject: [PATCH 54/57] Update docs/reference-guides/rancher-security/rancher-security.md Co-authored-by: Billy Tat --- docs/reference-guides/rancher-security/rancher-security.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/reference-guides/rancher-security/rancher-security.md b/docs/reference-guides/rancher-security/rancher-security.md index 0a891b55e09..05315341d2e 100644 --- a/docs/reference-guides/rancher-security/rancher-security.md +++ b/docs/reference-guides/rancher-security/rancher-security.md @@ -67,7 +67,7 @@ Each version of the hardening guide is intended to be used with specific version The benchmark self-assessment is a companion to the Rancher security hardening guide. While the hardening guide shows you how to harden the cluster, the benchmark guide is meant to help you evaluate the level of security of the hardened cluster. -Because Rancher installs Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. This guide will walk through the various controls and provide updated example commands to audit compliance in Rancher created clusters. The original benchmark documents can be downloaded from the [CIS website](https://www.cisecurity.org/benchmark/kubernetes/). +This guide walks through the various controls and provides updated example commands to audit compliance in Rancher created clusters. The original benchmark documents can be downloaded from the [CIS website](https://www.cisecurity.org/benchmark/kubernetes/). Each version of Rancher's self-assessment guide corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark. 
From 3b1c69bd2a2f4cad1a49b5a1ae4cf5085c92a3dd Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Mon, 28 Jul 2025 10:35:01 -0700 Subject: [PATCH 55/57] Update important files (config and kubeconfig) for rke2/k3s --- .../communicating-with-downstream-user-clusters.md | 3 ++- .../communicating-with-downstream-user-clusters.md | 3 ++- .../communicating-with-downstream-user-clusters.md | 4 +++- .../communicating-with-downstream-user-clusters.md | 3 ++- 4 files changed, 9 insertions(+), 4 deletions(-) diff --git a/docs/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md b/docs/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md index 18abbf631b6..f42e97da652 100644 --- a/docs/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md +++ b/docs/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md @@ -178,7 +178,8 @@ If you see an error related to "impersonation" in the UI, pay close attention to The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster: -- `kube_config_rancher-cluster.yml`: The Kubeconfig file for the cluster, this file contains credentials for full access to the cluster. You can use this file to authenticate with a Rancher-launched Kubernetes cluster if Rancher goes down. +- `config.yaml`: The RKE2 and K3s cluster configuration file. +- `rke2.yaml` or `k3s.yaml`: The Kubeconfig file for your RKE2 or K3s cluster. This file contains credentials for full access to the cluster. You can use this file to authenticate with a Rancher-launched Kubernetes cluster if Rancher goes down. For more information on connecting to a cluster without the Rancher authentication proxy and other configuration options, refer to the [kubeconfig file](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md) documentation. 
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md index 609bbbf60ff..21326c85789 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md @@ -81,7 +81,8 @@ Cluster Agent,也叫做 `cattle-cluster-agent`,是运行在下游集群中 维护、排除问题和升级集群需要用到以下文件,请妥善保管这些文件: -- `kube_config_rancher-cluster.yml`:集群的 Kubeconfig 文件,包含完全访问集群的凭证。如果 Rancher 出现问题时,你可以使用此文件认证由 Rancher 启动的 Kubernetes 集群。 +- `config.yaml`: The RKE2 and K3s cluster configuration file. +- `rke2.yaml` or `k3s.yaml`: The Kubeconfig file for your RKE2 or K3s cluster. This file contains credentials for full access to the cluster. You can use this file to authenticate with a Rancher-launched Kubernetes cluster if Rancher goes down. 
有关在没有 Rancher 认证代理和其他配置选项的情况下连接到集群的更多信息,请参见 [kubeconfig 文件](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md)。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md index 609bbbf60ff..d08db8494df 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md @@ -81,7 +81,9 @@ Cluster Agent,也叫做 `cattle-cluster-agent`,是运行在下游集群中 维护、排除问题和升级集群需要用到以下文件,请妥善保管这些文件: -- `kube_config_rancher-cluster.yml`:集群的 Kubeconfig 文件,包含完全访问集群的凭证。如果 Rancher 出现问题时,你可以使用此文件认证由 Rancher 启动的 Kubernetes 集群。 + +- `config.yaml`: The RKE2 and K3s cluster configuration file. +- `rke2.yaml` or `k3s.yaml`: The Kubeconfig file for your RKE2 or K3s cluster. This file contains credentials for full access to the cluster. You can use this file to authenticate with a Rancher-launched Kubernetes cluster if Rancher goes down. 
有关在没有 Rancher 认证代理和其他配置选项的情况下连接到集群的更多信息,请参见 [kubeconfig 文件](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md)。 diff --git a/versioned_docs/version-2.12/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md b/versioned_docs/version-2.12/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md index 18abbf631b6..f42e97da652 100644 --- a/versioned_docs/version-2.12/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md +++ b/versioned_docs/version-2.12/reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md @@ -178,7 +178,8 @@ If you see an error related to "impersonation" in the UI, pay close attention to The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster: -- `kube_config_rancher-cluster.yml`: The Kubeconfig file for the cluster, this file contains credentials for full access to the cluster. You can use this file to authenticate with a Rancher-launched Kubernetes cluster if Rancher goes down. +- `config.yaml`: The RKE2 and K3s cluster configuration file. +- `rke2.yaml` or `k3s.yaml`: The Kubeconfig file for your RKE2 or K3s cluster. This file contains credentials for full access to the cluster. You can use this file to authenticate with a Rancher-launched Kubernetes cluster if Rancher goes down. For more information on connecting to a cluster without the Rancher authentication proxy and other configuration options, refer to the [kubeconfig file](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md) documentation. 
From 1b087957acae2a40f89d4ca052a0f3136bc93310 Mon Sep 17 00:00:00 2001 From: LucasSaintarbor Date: Mon, 28 Jul 2025 15:32:02 -0700 Subject: [PATCH 56/57] Remove docker containers statement on Rancher Security Guides page --- .../reference-guides/rancher-security/rancher-security.md | 2 +- .../reference-guides/rancher-security/rancher-security.md | 2 +- .../reference-guides/rancher-security/rancher-security.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-security.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-security.md index c1434b4d54f..e2ed8a024a6 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-security.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-security.md @@ -67,7 +67,7 @@ Rancher 加固指南基于 Date: Mon, 28 Jul 2025 16:19:36 -0700 Subject: [PATCH 57/57] Add back removed rke1-hardening-guide folder --- .../rke1-hardening-guide.md | 2 +- ...ide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md | 5729 +++++++++-------- .../rke1-hardening-guide.md | 2 +- ...ide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md | 5727 ++++++++-------- .../rke1-hardening-guide.md | 2 +- ...ide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md | 5727 ++++++++-------- .../rke1-hardening-guide.md | 2 +- ...ide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md | 5729 +++++++++-------- 8 files changed, 11462 insertions(+), 11458 deletions(-) diff --git a/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md b/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md index 35ecd76ead2..afa5dc0fef1 100644 --- a/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md +++ 
b/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md @@ -510,4 +510,4 @@ rancher_kubernetes_engine_config: ## Conclusion -If you have followed this guide, your RKE custom cluster provisioned by Rancher will be configured to pass the CIS Kubernetes Benchmark. You can review our RKE self-assessment guides to understand how we verified each of the benchmarks and how you can do the same on your cluster. \ No newline at end of file +If you have followed this guide, your RKE custom cluster provisioned by Rancher will be configured to pass the CIS Kubernetes Benchmark. You can review our RKE self-assessment guides to understand how we verified each of the benchmarks and how you can do the same on your cluster. diff --git a/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md b/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md index e8bf71f7c78..ac002a20369 100644 --- a/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md +++ b/docs/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md @@ -1,2864 +1,2865 @@ ---- -title: RKE Self-Assessment Guide - CIS Benchmark v1.7 - K8s v1.25/v1.26/v1.27 ---- - - - - - - - -This document is a companion to the [RKE Hardening Guide](rke1-hardening-guide.md), which provides prescriptive guidance on how to harden RKE clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark. 
- - -This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes: - -| Rancher Version | CIS Benchmark Version | Kubernetes Version | -|-----------------|-----------------------|--------------------| -| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25/v1.26/v1.27 | - -This guide walks through the various controls and provide updated example commands to audit compliance in Rancher created clusters. Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. These checks will return a result of `Not Applicable`. - -This document is for Rancher operators, security teams, auditors and decision makers. - -For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.7. You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/). - -## Testing Methodology - -Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files. - -Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the command line on the hosts of all RKE nodes. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required in the testing and evaluation of test results. - -:::note - -This guide only covers `automated` (previously called `scored`) tests. 
- -::: - -### Controls - -## 1.1 Control Plane Node Configuration Files -### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the -control plane node. -For example, chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. 
-For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chmod 600 /etc/kubernetes/manifests/etcd.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. 
- -### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root /etc/kubernetes/manifests/etcd.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual) - - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chmod 600 - -**Audit:** - -```bash -ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a -``` - -**Expected Result**: - -```console -'permissions' is present -``` - -### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual) - - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. 
-For example, -chown root:root - -**Audit:** - -```bash -ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G -``` - -**Expected Result**: - -```console -'root:root' is present -``` - -### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated) - - -**Result:** pass - -**Remediation:** -On the etcd server node, get the etcd data directory, passed as an argument --data-dir, -from the command 'ps -ef | grep etcd'. -Run the below command (based on the etcd data directory found above). For example, -chmod 700 /var/lib/etcd - -**Audit:** - -```bash -stat -c %a /node/var/lib/etcd -``` - -**Expected Result**: - -```console -'700' is equal to '700' -``` - -**Returned Value**: - -```console -700 -``` - -### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) - - -**Result:** pass - -**Remediation:** -On the etcd server node, get the etcd data directory, passed as an argument --data-dir, -from the command 'ps -ef | grep etcd'. -Run the below command (based on the etcd data directory found above). -For example, chown etcd:etcd /var/lib/etcd - -**Audit:** - -```bash -stat -c %U:%G /node/var/lib/etcd -``` - -**Expected Result**: - -```console -'etcd:etcd' is present -``` - -**Returned Value**: - -```console -etcd:etcd -``` - -### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chmod 600 /etc/kubernetes/admin.conf -Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. 
- -### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chown root:root /etc/kubernetes/admin.conf -Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. - -### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chmod 600 scheduler -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. - -### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root scheduler -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. - -### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chmod 600 controllermanager -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. 
- -### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root controllermanager -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. - -### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown -R root:root /etc/kubernetes/pki/ - -**Audit Script:** `check_files_owner_in_dir.sh` - -```bash -#!/usr/bin/env bash - -# This script is used to ensure the owner is set to root:root for -# the given directory and all the files in it -# -# inputs: -# $1 = /full/path/to/directory -# -# outputs: -# true/false - -INPUT_DIR=$1 - -if [[ "${INPUT_DIR}" == "" ]]; then - echo "false" - exit -fi - -if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then - echo "false" - exit -fi - -statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*) -while read -r statInfoLine; do - f=$(echo ${statInfoLine} | cut -d' ' -f1) - p=$(echo ${statInfoLine} | cut -d' ' -f2) - - if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then - if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then - echo "false" - exit - fi - else - if [[ "$p" != "root:root" ]]; then - echo "false" - exit - fi - fi -done <<< "${statInfoLines}" - - -echo "true" -exit - -``` - -**Audit Execution:** - -```bash -./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Returned Value**: - -```console -true -``` - -### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive 
(Manual) - - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' -exec chmod -R 600 {} + - -**Audit:** - -```bash -find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' | xargs stat -c permissions=%a -``` - -**Expected Result**: - -```console -permissions has permissions 644, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=644 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 -``` - -### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -find /node/etc/kubernetes/ssl/ -name '*key.pem' -exec chmod -R 600 {} + - -**Audit:** - -```bash -find /node/etc/kubernetes/ssl/ -name '*key.pem' | xargs stat -c permissions=%a -``` - -**Expected Result**: - -```console -permissions has permissions 600, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 -``` - -## 1.2 API Server -### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the below parameter. ---anonymous-auth=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--anonymous-auth' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the documentation and configure alternate mechanisms for authentication. Then, -edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and remove the --token-auth-file= parameter. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--token-auth-file' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and remove the `DenyServiceExternalIPs` -from enabled admission plugins. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the -apiserver and kubelets. Then, edit API server pod specification file -/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the -kubelet client certificate and key parameters as below. ---kubelet-client-certificate= ---kubelet-client-key= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between
-the apiserver and kubelets. Then, edit the API server pod specification file
-/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
---kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
---kubelet-certificate-authority=
-When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers.
-
-### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
-One such example could be as below. 
---authorization-mode=RBAC - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' does not have 'AlwaysAllow' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to a value that includes Node. ---authorization-mode=Node,RBAC - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' has 'Node' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to a value that includes RBAC, -for example `--authorization-mode=Node,RBAC`. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' has 'RBAC' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set the desired limits in a configuration file. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -and set the below parameters. ---enable-admission-plugins=...,EventRateLimit,... ---admission-control-config-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'EventRateLimit' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a -value that does not include AlwaysAdmit. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual) - - -**Result:** warn - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to include -AlwaysPullImages. ---enable-admission-plugins=...,AlwaysPullImages,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'AlwaysPullImages' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual) - - -**Result:** warn - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to include -SecurityContextDeny, unless PodSecurityPolicy is already in place. ---enable-admission-plugins=...,SecurityContextDeny,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the documentation and create ServiceAccount objects as per your environment. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and ensure that the --disable-admission-plugins parameter is set to a -value that does not include ServiceAccount. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --disable-admission-plugins parameter to -ensure it does not include NamespaceLifecycle. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to a -value that includes NodeRestriction. ---enable-admission-plugins=...,NodeRestriction,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'NodeRestriction' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.16 Ensure that the --secure-port argument is not set to 0 - Note: This recommendation is obsolete and will be deleted per the consensus process (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and either remove the --secure-port parameter or -set it to a different (non-zero) desired port. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--secure-port' is greater than 0 OR '--secure-port' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.17 Ensure that the --profiling argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the below parameter. ---profiling=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--profiling' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.18 Ensure that the --audit-log-path argument is set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --audit-log-path parameter to a suitable path and -file where you would like audit logs to be written, for example, ---audit-log-path=/var/log/apiserver/audit.log - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-log-path' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --audit-log-maxage parameter to 30 -or as an appropriate number of days, for example, ---audit-log-maxage=30 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-log-maxage' is greater or equal to 30 -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate -value. For example, ---audit-log-maxbackup=10 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-log-maxbackup' is greater or equal to 10 -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB. -For example, to set it as 100 MB, --audit-log-maxsize=100 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-log-maxsize' is greater or equal to 100 -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual)

**Result:** warn

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
and set the below parameter as appropriate and if needed.
For example, --request-timeout=300s

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--service-account-lookup=true
Alternatively, you can delete the --service-account-lookup parameter from this file so
that the default takes effect.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --service-account-key-file parameter
to the public key file for service accounts. For example,
--service-account-key-file=&lt;filename&gt;

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--service-account-key-file' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the etcd certificate and key file parameters.
--etcd-certfile=&lt;path/to/client-certificate-file&gt;
--etcd-keyfile=&lt;path/to/client-key-file&gt;

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--etcd-certfile' is present AND '--etcd-keyfile' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the TLS certificate and private key file parameters.
--tls-cert-file=&lt;path/to/tls-certificate-file&gt;
--tls-private-key-file=&lt;path/to/tls-key-file&gt;

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--tls-cert-file' is present AND '--tls-private-key-file' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the client certificate authority file.
--client-ca-file=&lt;path/to/client-ca-file&gt;

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--client-ca-file' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the etcd certificate authority file parameter.
--etcd-cafile=&lt;path/to/ca-file&gt;

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--etcd-cafile' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and configure an EncryptionConfig file.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --encryption-provider-config parameter to the path of that file.
For example, --encryption-provider-config=&lt;path/to/EncryptionConfig/file&gt;

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--encryption-provider-config' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.30 Ensure that encryption providers are appropriately configured (Manual)

**Result:** warn

**Remediation:**
Follow the Kubernetes documentation and configure an EncryptionConfig file.
In this file, choose aescbc, kms or secretbox as the encryption provider.

**Audit:**

```bash
ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%'); if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
```

**Expected Result**:

```console
'provider' is present
```

### 1.2.31 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

## 1.3 Controller Manager

### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated)

**Result:** pass

**Remediation:**
Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
for example, --terminated-pod-gc-threshold=10

**Audit:**

```bash
/bin/ps -ef | grep kube-controller-manager | grep -v grep
```

**Expected Result**:

```console
'--terminated-pod-gc-threshold' is present
```

**Returned Value**:

```console
root 4184 4163 1 Sep11 ?
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true
```

### 1.3.2 Ensure that the --profiling argument is set to false (Automated)

**Result:** pass

**Remediation:**
Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
on the control plane node and set the below parameter.
--profiling=false

**Audit:**

```bash
/bin/ps -ef | grep kube-controller-manager | grep -v grep
```

**Expected Result**:

```console
'--profiling' is equal to 'false'
```

**Returned Value**:

```console
root 4184 4163 1 Sep11 ?
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node to set the below parameter. ---use-service-account-credentials=true - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--use-service-account-credentials' is not equal to 'false' -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --service-account-private-key-file parameter -to the private key file for service accounts. ---service-account-private-key-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--service-account-private-key-file' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --root-ca-file parameter to the certificate bundle file`. ---root-ca-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--root-ca-file' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. ---feature-gates=RotateKubeletServerCertificate=true -Cluster provisioned by RKE handles certificate rotation directly through RKE. 
- -### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and ensure the correct value for the --bind-address parameter - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--bind-address' is present OR '--bind-address' is not present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -## 1.4 Scheduler -### 1.4.1 Ensure that the --profiling argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file -on the control 
plane node and set the below parameter. ---profiling=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected Result**: - -```console -'--profiling' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4339 4318 0 Sep11 ? 00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true -``` - -### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml -on the control plane node and ensure the correct value for the --bind-address parameter - -**Audit:** - -```bash -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected Result**: - -```console -'--bind-address' is present OR '--bind-address' is not present -``` - -**Returned Value**: - -```console -root 4339 4318 0 Sep11 ? 
00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true -``` - -## 2 Etcd Node Configuration -### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the etcd service documentation and configure TLS encryption. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml -on the master node and set the below parameters. ---cert-file= ---key-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--cert-file' is present AND '--key-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and set the below parameter. ---client-cert-auth="true" - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--client-cert-auth' is equal to 'true' -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.3 Ensure that the --auto-tls argument is not set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and either remove the --auto-tls parameter or set it to false. 
- --auto-tls=false - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present -``` - -**Returned Value**: - -```console -PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ -``` - -### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the etcd service documentation and configure peer TLS encryption as appropriate -for your etcd cluster. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the -master node and set the below parameters. ---peer-client-file= ---peer-key-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--peer-cert-file' is present AND '--peer-key-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and set the below parameter. ---peer-client-cert-auth=true - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--peer-client-cert-auth' is equal to 'true' -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and either remove the --peer-auto-tls parameter or set it to false. 
---peer-auto-tls=false - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present -``` - -**Returned Value**: - -```console -PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ -``` - -### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated) - - -**Result:** pass - -**Remediation:** -[Manual test] -Follow the etcd documentation and create a dedicated certificate authority setup for the -etcd service. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the -master node and set the below parameter. ---trusted-ca-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--trusted-ca-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -## 3.1 Authentication and Authorization -### 3.1.1 Client certificate authentication should not be used for users (Manual) - - -**Result:** warn - -**Remediation:** -Alternative mechanisms provided by Kubernetes such as the use of OIDC should be -implemented in place of client certificates. - -### 3.1.2 Service account token authentication should not be used for users (Manual) - - -**Result:** warn - -**Remediation:** -Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented -in place of service account tokens. - -### 3.1.3 Bootstrap token authentication should not be used for users (Manual) - - -**Result:** warn - -**Remediation:** -Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented -in place of bootstrap tokens. - -## 3.2 Logging -### 3.2.1 Ensure that a minimal audit policy is created (Automated) - - -**Result:** pass - -**Remediation:** -Create an audit policy file for your cluster. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-policy-file' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 3.2.2 Ensure that the audit policy covers key security concerns (Manual) - - -**Result:** warn - -**Remediation:** -Review the audit policy provided for the cluster and ensure that it covers -at least the following areas, -- Access to Secrets managed by the cluster. Care should be taken to only - log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in - order to avoid risk of logging sensitive data. -- Modification of Pod and Deployment objects. -- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`. -For most requests, minimally logging at the Metadata level is recommended -(the most basic level of logging). - -## 4.1 Worker Node Configuration Files -### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service. -All configuration is passed in as arguments at container run time. 
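Since all kubelet configuration on RKE nodes arrives as container arguments, the audits in this report recover each setting from the process table. A minimal sketch of that extraction, assuming a hypothetical `flag_value` helper and an illustrative argument string (not captured from a live node):

```bash
# Hypothetical helper: pull a single flag's value out of a kubelet argument
# string, mirroring the /bin/ps-based audits used throughout section 4.2.
flag_value() {
  # $1 = argument string, $2 = flag name without the leading dashes
  printf '%s\n' "$1" | tr ' ' '\n' | sed -n "s/^--$2=//p"
}

# Illustrative argument string only; a real audit would feed in `ps` output.
args='kubelet --anonymous-auth=false --read-only-port=0'
flag_value "$args" anonymous-auth   # prints "false"
```

The same extraction applies to any of the `--flag=value`-style arguments checked below.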
- -### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, -chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service. - All configuration is passed in as arguments at container run time. - -### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, -chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' -``` - -**Expected Result**: - -```console -permissions has permissions 600, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 -``` - -### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. 
-For example, chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' -``` - -**Expected Result**: - -```console -'root:root' is present -``` - -**Returned Value**: - -```console -root:root -``` - -### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, -chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' -``` - -**Expected Result**: - -```console -permissions has permissions 600, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 -``` - -### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. 
-For example, -chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' -``` - -**Expected Result**: - -```console -'root:root' is present -``` - -**Returned Value**: - -```console -root:root -``` - -### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated) - - -**Result:** fail - -**Remediation:** -Run the following command to modify the file permissions of the ---client-ca-file chmod 600 - -**Audit:** - -```bash -stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem -``` - -**Expected Result**: - -```console -permissions has permissions 644, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=644 -``` - -### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the following command to modify the ownership of the --client-ca-file. -chown root:root - -**Audit:** - -```bash -stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem -``` - -**Expected Result**: - -```console -'root:root' is equal to 'root:root' -``` - -**Returned Value**: - -```console -root:root -``` - -### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the following command (using the config file location identified in the Audit step) -chmod 600 /var/lib/kubelet/config.yaml -Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet. -All configuration is passed in as arguments at container run time. 
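Every check in section 4.1 follows the same audit-then-remediate pattern, and the 4.1.7 failure above (`kube-ca.pem` left at 644) is exactly what it catches. A sketch of that loop, assuming the RKE default paths quoted in the audits (the `check_and_fix` helper is illustrative, not part of kube-bench):

```bash
# Illustrative audit/remediate loop for the section 4.1 file checks.
# Paths are the RKE defaults shown in the audits above.
check_and_fix() {
  f=$1
  [ -e "$f" ] || return 0
  perms=$(stat -c %a "$f")
  # "600 or more restrictive" forbids any bit outside rw for the owner,
  # so test the 0177 bitmask rather than comparing the numbers directly.
  if [ "$(( 0$perms & 0177 ))" -ne 0 ]; then
    chmod 600 "$f"
  fi
}

check_and_fix /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
check_and_fix /node/etc/kubernetes/ssl/kube-ca.pem   # reported as 644 in 4.1.7
```

The bitmask matters because a plain numeric comparison would wrongly accept modes like 444, which grant group and world read.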
- -### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Run the following command (using the config file location identified in the Audit step) -chown root:root /var/lib/kubelet/config.yaml -Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet. -All configuration is passed in as arguments at container run time. - -## 4.2 Kubelet -### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to -`false`. -If using executable arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. -`--anonymous-auth=false` -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--anonymous-auth' is equal to 'false' -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If -using executable arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_AUTHZ_ARGS variable. ---authorization-mode=Webhook -Based on your system, restart the kubelet service. 
For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--authorization-mode' does not have 'AlwaysAllow' -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file 
to set `authentication.x509.clientCAFile` to -the location of the client CA file. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_AUTHZ_ARGS variable. ---client-ca-file= -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--client-ca-file' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local 
--tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `readOnlyPort` to 0. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. ---read-only-port=0 -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--read-only-port' is equal to '0' OR '--read-only-port' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a -value other than 0. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. ---streaming-connection-idle-timeout=5m -Based on your system, restart the kubelet service. 
For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated) - - -**Result:** pass 
- -**Remediation:** -If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -remove the --make-iptables-util-chains argument from the -KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 
--kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.7 Ensure that the --hostname-override argument is not set (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -on each worker node and remove the --hostname-override argument from the -KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service -Not Applicable - Clusters provisioned by RKE set the --hostname-override to avoid any hostname configuration errors - -### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--event-qps' is greater or equal to 0 OR '--event-qps' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `tlsCertFile` to the location -of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile` -to the location of the corresponding private key file. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameters in KUBELET_CERTIFICATE_ARGS variable. 
---tls-cert-file= ---tls-private-key-file= -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--tls-cert-file' is present AND '--tls-private-key-file' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.10 Ensure that the --rotate-certificates argument is 
not set to false (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or -remove it altogether to use the default value. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -remove the --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS -variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--rotate-certificates' is present OR '--rotate-certificates' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true 
--cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable. ---feature-gates=RotateKubeletServerCertificate=true -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service -Not Applicable - Clusters provisioned by RKE handles certificate rotation directly through RKE. - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `TLSCipherSuites` to -TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -or to a subset of these values. -If using executable arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the --tls-cipher-suites parameter as follows, or to a subset of these values. 
---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.13 Ensure that a limit is set on pod PIDs (Manual) - - -**Result:** warn - -**Remediation:** -Decide on an appropriate level for this parameter and set it, -either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting. - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--pod-max-pids' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -## 5.1 RBAC and Service Accounts -### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual) - - -**Result:** warn - -**Remediation:** -Identify all clusterrolebindings to the cluster-admin role. Check if they are used and -if they need this role or if they could use a role with fewer privileges. 
-Where possible, first bind users to a lower privileged role and then remove the -clusterrolebinding to the cluster-admin role : -kubectl delete clusterrolebinding [name] - -### 5.1.2 Minimize access to secrets (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove get, list and watch access to Secret objects in the cluster. - -### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual) - - -**Result:** warn - -**Remediation:** -Where possible replace any use of wildcards in clusterroles and roles with specific -objects or actions. - -### 5.1.4 Minimize access to create pods (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove create access to pod objects in the cluster. - -### 5.1.5 Ensure that default service accounts are not actively used. (Manual) - - -**Result:** pass - -**Remediation:** -Create explicit service accounts wherever a Kubernetes workload requires specific access -to the Kubernetes API server. -Modify the configuration of each default service account to include this value -automountServiceAccountToken: false - -**Audit Script:** `check_for_default_sa.sh` - -```bash -#!/bin/bash - -set -eE - -handle_error() { - echo "false" -} - -trap 'handle_error' ERR - -count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l) -if [[ ${count_sa} -gt 0 ]]; then - echo "false" - exit -fi - -for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name") -do - for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"') - do - read kind name <<<$(IFS=","; echo $result) - 
resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l) - if [[ ${resource_count} -gt 0 ]]; then - echo "false" - exit - fi - done -done - - -echo "true" - -``` - -**Audit Execution:** - -```bash -./check_for_default_sa.sh -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Returned Value**: - -```console -true -``` - -### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) - - -**Result:** warn - -**Remediation:** -Modify the definition of pods and service accounts which do not need to mount service -account tokens to disable it. - -### 5.1.7 Avoid use of system:masters group (Manual) - - -**Result:** warn - -**Remediation:** -Remove the system:masters group from all users in the cluster. - -### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove the impersonate, bind and escalate rights from subjects. - -### 5.1.9 Minimize access to create persistent volumes (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove create access to PersistentVolume objects in the cluster. - -### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the proxy sub-resource of node objects. - -### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the approval sub-resource of certificatesigningrequest objects. 
- -### 5.1.12 Minimize access to webhook configuration objects (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects - -### 5.1.13 Minimize access to the service account token creation (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the token sub-resource of serviceaccount objects. - -## 5.2 Pod Security Standards -### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual) - - -**Result:** warn - -**Remediation:** -Ensure that either Pod Security Admission or an external policy control system is in place -for every namespace which contains user workloads. - -### 5.2.2 Minimize the admission of privileged containers (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of privileged containers. - -### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostPID` containers. - -### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostIPC` containers. - -### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostNetwork` containers. 
- -### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers with `.spec.allowPrivilegeEscalation` set to `true`. - -### 5.2.7 Minimize the admission of root containers (Manual) - - -**Result:** warn - -**Remediation:** -Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot` -or `MustRunAs` with the range of UIDs not including 0, is set. - -### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers with the `NET_RAW` capability. - -### 5.2.9 Minimize the admission of containers with added capabilities (Manual) - - -**Result:** warn - -**Remediation:** -Ensure that `allowedCapabilities` is not present in policies for the cluster unless -it is set to an empty array. - -### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual) - - -**Result:** warn - -**Remediation:** -Review the use of capabilites in applications running on your cluster. Where a namespace -contains applicaions which do not require any Linux capabities to operate consider adding -a PSP which forbids the admission of containers which do not drop all capabilities. - -### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`. 
- -### 5.2.12 Minimize the admission of HostPath volumes (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers with `hostPath` volumes. - -### 5.2.13 Minimize the admission of containers which use HostPorts (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers which use `hostPort` sections. - -## 5.3 Network Policies and CNI -### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual) - - -**Result:** warn - -**Remediation:** -If the CNI plugin in use does not support network policies, consideration should be given to -making use of a different plugin, or finding an alternate mechanism for restricting traffic -in the Kubernetes cluster. - -### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual) - - -**Result:** warn - -**Remediation:** -Follow the documentation and create NetworkPolicy objects as you need them. - -## 5.4 Secrets Management -### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual) - - -**Result:** warn - -**Remediation:** -If possible, rewrite application code to read Secrets from mounted secret files, rather than -from environment variables. - -### 5.4.2 Consider external secret storage (Manual) - - -**Result:** warn - -**Remediation:** -Refer to the Secrets management options offered by your cloud provider or a third-party -secrets management solution. - -## 5.5 Extensible Admission Control -### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual) - - -**Result:** warn - -**Remediation:** -Follow the Kubernetes documentation and setup image provenance. 
- -## 5.7 General Policies -### 5.7.1 Create administrative boundaries between resources using namespaces (Manual) - - -**Result:** warn - -**Remediation:** -Follow the documentation and create namespaces for objects in your deployment as you need -them. - -### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual) - - -**Result:** warn - -**Remediation:** -Use `securityContext` to enable the docker/default seccomp profile in your pod definitions. -An example is as below: - securityContext: - seccompProfile: - type: RuntimeDefault - -### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual) - - -**Result:** warn - -**Remediation:** -Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a -suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker -Containers. - -### 5.7.4 The default namespace should not be used (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Ensure that namespaces are created to allow for appropriate segregation of Kubernetes -resources and that all new resources are created in a specific namespace. +--- +title: RKE Self-Assessment Guide - CIS Benchmark v1.7 - K8s v1.25/v1.26/v1.27 +--- + + + + + + + +This document is a companion to the [RKE Hardening Guide](rke1-hardening-guide.md), which provides prescriptive guidance on how to harden RKE clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark. 
+
+
+This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
+
+| Rancher Version | CIS Benchmark Version | Kubernetes Version |
+|-----------------|-----------------------|--------------------|
+| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25/v1.26/v1.27 |
+
+This guide walks through the various controls and provides updated example commands for auditing compliance in Rancher-created clusters. Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. These checks will return a result of `Not Applicable`.
+
+This document is intended for Rancher operators, security teams, auditors, and decision makers.
+
+For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.7. You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
+
+## Testing Methodology
+
+Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files.
+
+Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the command line on the hosts of all RKE nodes. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required for testing and evaluating the results.
+
+:::note
+
+This guide only covers `automated` (previously called `scored`) tests.
+
+:::
+
+### Controls
+
+## 1.1 Control Plane Node Configuration Files
+### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the
+control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 /etc/kubernetes/manifests/etcd.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
+All configuration is passed in as arguments at container run time.
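Several of the file-permission controls in this section hinge on the phrase "600 or more restrictive": the owner may have at most read and write, and group/other must have no access at all. The following is a minimal sketch of that comparison, run against a scratch file created with `mktemp` so it is safe to try on any Linux host; the helper name `is_600_or_stricter` is illustrative and not part of the benchmark tooling.

```shell
# Return success only if the file grants no bits beyond rw for the owner,
# i.e. its octal mode is 600 or more restrictive (600, 400, 200, 000, ...).
is_600_or_stricter() {
  perms=$(stat -c %a "$1")          # octal mode, e.g. 600 or 644
  [ $(( 0$perms & 0177 )) -eq 0 ]   # mask covers owner x and all group/other bits
}

f=$(mktemp)
chmod 600 "$f"
is_600_or_stricter "$f" && echo "600: ok"               # prints "600: ok"
chmod 644 "$f"
is_600_or_stricter "$f" || echo "644: too permissive"   # prints "644: too permissive"
rm -f "$f"
```

The same mask logic explains why a returned value of `permissions=644` fails one of the later checks in this guide: the group/other read bits fall outside the allowed mask.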
+
+### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root /etc/kubernetes/manifests/etcd.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600
+
+**Audit:**
+
+```bash
+ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
+```
+
+**Expected Result**:
+
+```console
+'permissions' is present
+```
+
+### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root
+
+**Audit:**
+
+```bash
+ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above). For example,
+chmod 700 /var/lib/etcd
+
+**Audit:**
+
+```bash
+stat -c %a /node/var/lib/etcd
+```
+
+**Expected Result**:
+
+```console
+'700' is equal to '700'
+```
+
+**Returned Value**:
+
+```console
+700
+```
+
+### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above).
+For example, chown etcd:etcd /var/lib/etcd
+
+**Audit:**
+
+```bash
+stat -c %U:%G /node/var/lib/etcd
+```
+
+**Expected Result**:
+
+```console
+'etcd:etcd' is present
+```
+
+**Returned Value**:
+
+```console
+etcd:etcd
+```
+
+### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/admin.conf
+Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes.
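The two etcd data-directory audits above reduce to a pair of `stat` format strings: `%a` for the octal mode and `%U:%G` for the owner and group. A quick way to see both checks in action without touching `/var/lib/etcd` is to run them against a throwaway directory; the scratch path here is illustrative only, and on a real node the ownership check would be compared against `etcd:etcd`.

```shell
# Create a scratch directory standing in for the etcd data directory.
d=$(mktemp -d)
chmod 700 "$d"

# 1.1.11-style check: the mode must be 700 (or stricter).
mode=$(stat -c %a "$d")
echo "permissions=$mode"   # prints "permissions=700"

# 1.1.12-style check: ownership reported as user:group. On a real etcd
# node this would read etcd:etcd; here it is the current user and group.
stat -c %U:%G "$d"

rm -rf "$d"
```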
+ +### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chown root:root /etc/kubernetes/admin.conf +Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. + +### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chmod 600 scheduler +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. +All configuration is passed in as arguments at container run time. + +### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root scheduler +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. +All configuration is passed in as arguments at container run time. + +### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chmod 600 controllermanager +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. +All configuration is passed in as arguments at container run time. 
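Controls 1.1.13 through 1.1.18 are Not Applicable for the same underlying reason: RKE starts each control plane component as a container whose configuration lives entirely in its argument list, so there is no conf file to chmod or chown. The audits later in this guide therefore scan the process's argument list instead. The sketch below shows that pattern against a canned `ps` line, abridged from the sample output under control 1.2.1, rather than a live cluster.

```shell
# Abridged ps -ef line for kube-apiserver, as captured elsewhere in this guide.
sample='root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --anonymous-auth=false --profiling=false --secure-port=6443'

# The audit question "is --anonymous-auth set to false?" becomes a scan of
# the argument list: split on whitespace, then match the flag of interest.
echo "$sample" | tr ' ' '\n' | grep '^--anonymous-auth'
# prints "--anonymous-auth=false"
```

On a live node, `sample` would instead come from `/bin/ps -ef | grep kube-apiserver | grep -v grep`, exactly as the audit commands in section 1.2 do.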
+ +### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root controllermanager +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. +All configuration is passed in as arguments at container run time. + +### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown -R root:root /etc/kubernetes/pki/ + +**Audit Script:** `check_files_owner_in_dir.sh` + +```bash +#!/usr/bin/env bash + +# This script is used to ensure the owner is set to root:root for +# the given directory and all the files in it +# +# inputs: +# $1 = /full/path/to/directory +# +# outputs: +# true/false + +INPUT_DIR=$1 + +if [[ "${INPUT_DIR}" == "" ]]; then + echo "false" + exit +fi + +if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then + echo "false" + exit +fi + +statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*) +while read -r statInfoLine; do + f=$(echo ${statInfoLine} | cut -d' ' -f1) + p=$(echo ${statInfoLine} | cut -d' ' -f2) + + if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then + if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then + echo "false" + exit + fi + else + if [[ "$p" != "root:root" ]]; then + echo "false" + exit + fi + fi +done <<< "${statInfoLines}" + + +echo "true" +exit + +``` + +**Audit Execution:** + +```bash +./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl +``` + +**Expected Result**: + +```console +'true' is equal to 'true' +``` + +**Returned Value**: + +```console +true +``` + +### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive 
(Manual) + + +**Result:** warn + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' -exec chmod -R 600 {} + + +**Audit:** + +```bash +find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' | xargs stat -c permissions=%a +``` + +**Expected Result**: + +```console +permissions has permissions 644, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=644 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 +``` + +### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +find /node/etc/kubernetes/ssl/ -name '*key.pem' -exec chmod -R 600 {} + + +**Audit:** + +```bash +find /node/etc/kubernetes/ssl/ -name '*key.pem' | xargs stat -c permissions=%a +``` + +**Expected Result**: + +```console +permissions has permissions 600, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 +``` + +## 1.2 API Server +### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the below parameter. +--anonymous-auth=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--anonymous-auth' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the documentation and configure alternate mechanisms for authentication. Then, +edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and remove the --token-auth-file= parameter. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--token-auth-file' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and remove the `DenyServiceExternalIPs` +from enabled admission plugins. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the +apiserver and kubelets. Then, edit API server pod specification file +/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the +kubelet client certificate and key parameters as below. +--kubelet-client-certificate= +--kubelet-client-key= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between
+the apiserver and kubelets. Then, edit the API server pod specification file
+/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
+--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
+--kubelet-certificate-authority=
+When generating serving certificates, functionality could break in conjunction with the hostname overrides that are required for certain cloud providers.
+
+### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
+One such example could be as below. 
+--authorization-mode=RBAC + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' does not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to a value that includes Node. +--authorization-mode=Node,RBAC + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' has 'Node' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to a value that includes RBAC, +for example `--authorization-mode=Node,RBAC`. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' has 'RBAC' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set the desired limits in a configuration file. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +and set the below parameters. +--enable-admission-plugins=...,EventRateLimit,... +--admission-control-config-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'EventRateLimit' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a +value that does not include AlwaysAdmit. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual) + + +**Result:** warn + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --enable-admission-plugins parameter to include +AlwaysPullImages. +--enable-admission-plugins=...,AlwaysPullImages,... + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'AlwaysPullImages' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to include
+SecurityContextDeny, unless PodSecurityPolicy is already in place.
+--enable-admission-plugins=...,SecurityContextDeny,...
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the documentation and create ServiceAccount objects as per your environment.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
+value that does not include ServiceAccount.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --disable-admission-plugins parameter to
+ensure it does not include NamespaceLifecycle.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to a
+value that includes NodeRestriction.
+--enable-admission-plugins=...,NodeRestriction,...
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'NodeRestriction'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.16 Ensure that the --secure-port argument is not set to 0 - Note: This recommendation is obsolete and will be deleted per the consensus process (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and either remove the --secure-port parameter or
+set it to a different (non-zero) desired port.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--secure-port' is greater than 0 OR '--secure-port' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.17 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.18 Ensure that the --audit-log-path argument is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-path parameter to a suitable path and
+file where you would like audit logs to be written, for example,
+--audit-log-path=/var/log/apiserver/audit.log
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-path' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxage parameter to 30
+or as an appropriate number of days, for example,
+--audit-log-maxage=30
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxage' is greater or equal to 30
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
+value. For example,
+--audit-log-maxbackup=10
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxbackup' is greater or equal to 10
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
+For example, to set it as 100 MB, --audit-log-maxsize=100
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxsize' is greater or equal to 100
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual)

**Result:** warn

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
and set the below parameter as appropriate and if needed.
For example, --request-timeout=300s

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--service-account-lookup=true
Alternatively, you can delete the --service-account-lookup parameter from this file so
that the default takes effect.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --service-account-key-file parameter
to the public key file for service accounts. For example,
--service-account-key-file=

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--service-account-key-file' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the etcd certificate and key file parameters.
--etcd-certfile=
--etcd-keyfile=

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--etcd-certfile' is present AND '--etcd-keyfile' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the TLS certificate and private key file parameters.
--tls-cert-file=
--tls-private-key-file=

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--tls-cert-file' is present AND '--tls-private-key-file' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the client certificate authority file.
--client-ca-file=

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--client-ca-file' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the etcd certificate authority file parameter.
--etcd-cafile=

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--etcd-cafile' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and configure an EncryptionConfig file.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --encryption-provider-config parameter to the path of that file.
For example, --encryption-provider-config=

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--encryption-provider-config' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.30 Ensure that encryption providers are appropriately configured (Manual)

**Result:** warn

**Remediation:**
Follow the Kubernetes documentation and configure an EncryptionConfig file.
In this file, choose aescbc, kms or secretbox as the encryption provider.

**Audit:**

```bash
ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%')
if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
```

**Expected Result**:

```console
'provider' is present
```

### 1.2.31 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

## 1.3 Controller Manager

### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated)

**Result:** pass

**Remediation:**
Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
for example, --terminated-pod-gc-threshold=10

**Audit:**

```bash
/bin/ps -ef | grep kube-controller-manager | grep -v grep
```

**Expected Result**:

```console
'--terminated-pod-gc-threshold' is present
```

**Returned Value**:

```console
root 4184 4163 1 Sep11 ?
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true
```

### 1.3.2 Ensure that the --profiling argument is set to false (Automated)

**Result:** pass

**Remediation:**
Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
on the control plane node and set the below parameter.
--profiling=false

**Audit:**

```bash
/bin/ps -ef | grep kube-controller-manager | grep -v grep
```

**Expected Result**:

```console
'--profiling' is equal to 'false'
```

**Returned Value**:

```console
root 4184 4163 1 Sep11 ?
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node to set the below parameter. +--use-service-account-credentials=true + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--use-service-account-credentials' is not equal to 'false' +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --service-account-private-key-file parameter +to the private key file for service accounts. +--service-account-private-key-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--service-account-private-key-file' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --root-ca-file parameter to the certificate bundle file. +--root-ca-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--root-ca-file' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ?
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. +--feature-gates=RotateKubeletServerCertificate=true +Cluster provisioned by RKE handles certificate rotation directly through RKE. 
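Every controller-manager check in this section audits the same `ps` output and looks for one flag at a time. A minimal sketch of that flag extraction, using an abbreviated sample command line (the `flag_value` helper and the sample string are illustrative only, not part of kube-bench):

```shell
# Abbreviated sample of the kube-controller-manager command line audited above.
cmdline='kube-controller-manager --profiling=false --terminated-pod-gc-threshold=1000 --use-service-account-credentials=true'

# flag_value FLAG CMDLINE: print the value of --FLAG=... if the flag is present.
flag_value() {
  printf '%s\n' "$2" | tr ' ' '\n' | grep -- "^--$1=" | cut -d= -f2
}

flag_value profiling "$cmdline"                    # prints: false
flag_value terminated-pod-gc-threshold "$cmdline"  # prints: 1000
```

A flag that is absent simply produces no output, which is how a check such as 1.3.7 can accept either "`--bind-address` is present" or "not present".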
+ +### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and ensure the correct value for the --bind-address parameter + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--bind-address' is present OR '--bind-address' is not present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +## 1.4 Scheduler +### 1.4.1 Ensure that the --profiling argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file +on the control 
plane node and set the below parameter. +--profiling=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-scheduler | grep -v grep +``` + +**Expected Result**: + +```console +'--profiling' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4339 4318 0 Sep11 ? 00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true +``` + +### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml +on the control plane node and ensure the correct value for the --bind-address parameter + +**Audit:** + +```bash +/bin/ps -ef | grep kube-scheduler | grep -v grep +``` + +**Expected Result**: + +```console +'--bind-address' is present OR '--bind-address' is not present +``` + +**Returned Value**: + +```console +root 4339 4318 0 Sep11 ? 
00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true +``` + +## 2 Etcd Node Configuration +### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the etcd service documentation and configure TLS encryption. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml +on the master node and set the below parameters. +--cert-file= +--key-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--cert-file' is present AND '--key-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and set the below parameter. +--client-cert-auth="true" + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.3 Ensure that the --auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and either remove the --auto-tls parameter or set it to false. 
+ --auto-tls=false + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present +``` + +**Returned Value**: + +```console +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ +``` + +### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the etcd service documentation and configure peer TLS encryption as appropriate +for your etcd cluster. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the +master node and set the below parameters. +--peer-cert-file= +--peer-key-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--peer-cert-file' is present AND '--peer-key-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ?
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and set the below parameter. +--peer-client-cert-auth=true + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--peer-client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and either remove the --peer-auto-tls parameter or set it to false. 
+--peer-auto-tls=false + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present +``` + +**Returned Value**: + +```console +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ +``` + +### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated) + + +**Result:** pass + +**Remediation:** +[Manual test] +Follow the etcd documentation and create a dedicated certificate authority setup for the +etcd service. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the +master node and set the below parameter. +--trusted-ca-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--trusted-ca-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +## 3.1 Authentication and Authorization +### 3.1.1 Client certificate authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be +implemented in place of client certificates. + +### 3.1.2 Service account token authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented +in place of service account tokens. + +### 3.1.3 Bootstrap token authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented +in place of bootstrap tokens. + +## 3.2 Logging +### 3.2.1 Ensure that a minimal audit policy is created (Automated) + + +**Result:** pass + +**Remediation:** +Create an audit policy file for your cluster. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--audit-policy-file' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the audit policy provided for the cluster and ensure that it covers
+at least the following areas,
+- Access to Secrets managed by the cluster. Care should be taken to only
+ log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
+ order to avoid risk of logging sensitive data.
+- Modification of Pod and Deployment objects.
+- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
+For most requests, minimally logging at the Metadata level is recommended
+(the most basic level of logging).
+
+## 4.1 Worker Node Configuration Files
+### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service.
+All configuration is passed in as arguments at container run time.
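The file-permission audits throughout section 4.1 all share one `stat`-based pattern: a file passes when no group or other permission bits are set (600 or stricter). A minimal sketch of that check, run against a throwaway temp file rather than a real node path such as `/node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml` (the `check_600` helper name is ours, not part of the benchmark):

```shell
#!/bin/sh
# Demo of the section 4.1 audit pattern: a file passes when its mode has no
# group/other bits, i.e. it is 600 or more restrictive. The temp file stands
# in for a real node path; run the same check against the actual files.
check_600() {
  p=$(stat -c %a "$1")
  # Mask off group/other bits (077 octal); any remaining bit is a failure.
  if [ "$((0$p & 077))" -eq 0 ]; then
    echo "permissions=$p (pass)"
  else
    echo "permissions=$p (fail: expected 600 or more restrictive)"
  fi
}

f=$(mktemp)
chmod 600 "$f"
check_600 "$f"                      # permissions=600 (pass)
echo "ownership=$(stat -c %U:%G "$f")"
rm -f "$f"
```

The same bitmask check distinguishes a genuinely restrictive mode such as 400 from a superficially small but world-readable one such as 444, which a plain numeric comparison would miss.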
+
+### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service.
+All configuration is passed in as arguments at container run time.
+
+### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
+
+### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example, chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
+
+### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example, +chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml + +**Audit:** + +```bash +/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' +``` + +**Expected Result**: + +```console +'root:root' is present +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated) + + +**Result:** fail + +**Remediation:** +Run the following command to modify the file permissions of the +--client-ca-file chmod 600 + +**Audit:** + +```bash +stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem +``` + +**Expected Result**: + +```console +permissions has permissions 644, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=644 +``` + +### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the following command to modify the ownership of the --client-ca-file. +chown root:root + +**Audit:** + +```bash +stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem +``` + +**Expected Result**: + +```console +'root:root' is equal to 'root:root' +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the following command (using the config file location identified in the Audit step) +chmod 600 /var/lib/kubelet/config.yaml +Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet. +All configuration is passed in as arguments at container run time. 
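The one failing check above, 4.1.7, is fixed with a single `chmod` and can be verified by re-running the same `stat`-based audit. A sketch on a stand-in temp file (the real target is `/node/etc/kubernetes/ssl/kube-ca.pem` on each node):

```shell
#!/bin/sh
# Reproduce the 4.1.7 failure on a temp file, apply the remediation, and
# re-run the stat-based audit the benchmark uses.
ca=$(mktemp)                                      # stands in for kube-ca.pem
chmod 644 "$ca"
echo "before: $(stat -c permissions=%a "$ca")"    # before: permissions=644

chmod 600 "$ca"                                   # the 4.1.7 remediation
echo "after: $(stat -c permissions=%a "$ca")"     # after: permissions=600
rm -f "$ca"
```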
+
+### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the following command (using the config file location identified in the Audit step)
+chown root:root /var/lib/kubelet/config.yaml
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet.
+All configuration is passed in as arguments at container run time.
+
+## 4.2 Kubelet
+### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
+`false`.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+`--anonymous-auth=false`
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--anonymous-auth' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ?
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If +using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--authorization-mode=Webhook +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--authorization-mode' does not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file 
to set `authentication.x509.clientCAFile` to +the location of the client CA file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--client-ca-file= +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--client-ca-file' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local 
--tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `readOnlyPort` to 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--read-only-port=0 +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--read-only-port' is equal to '0' OR '--read-only-port' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a +value other than 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--streaming-connection-idle-timeout=5m +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated) + + +**Result:** pass 
+ +**Remediation:** +If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +remove the --make-iptables-util-chains argument from the +KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 
--kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.7 Ensure that the --hostname-override argument is not set (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+on each worker node and remove the --hostname-override argument from the
+KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+Not Applicable - Clusters provisioned by RKE set --hostname-override to avoid any hostname configuration errors.
+
+### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the `--event-qps` parameter in the KUBELET_SYSTEM_PODS_ARGS variable to an appropriate level.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--event-qps' is greater or equal to 0 OR '--event-qps' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ?
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `tlsCertFile` to the location +of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile` +to the location of the corresponding private key file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameters in KUBELET_CERTIFICATE_ARGS variable. 
+--tls-cert-file= +--tls-private-key-file= +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--tls-cert-file' is present AND '--tls-private-key-file' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.10 Ensure that the --rotate-certificates argument is 
not set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or
+remove it altogether to use the default value.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
+variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--rotate-certificates' is present OR '--rotate-certificates' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true
--cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable. +--feature-gates=RotateKubeletServerCertificate=true +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service +Not Applicable - Clusters provisioned by RKE handles certificate rotation directly through RKE. + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `TLSCipherSuites` to +TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +or to a subset of these values. +If using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the --tls-cipher-suites parameter as follows, or to a subset of these values. 
+--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.13 Ensure that a limit is set on pod PIDs (Manual) + + +**Result:** warn + +**Remediation:** +Decide on an appropriate level for this parameter and set it, +either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting. + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--pod-max-pids' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +## 5.1 RBAC and Service Accounts +### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual) + + +**Result:** warn + +**Remediation:** +Identify all clusterrolebindings to the cluster-admin role. Check if they are used and +if they need this role or if they could use a role with fewer privileges. 
+Where possible, first bind users to a lower privileged role and then remove the +clusterrolebinding to the cluster-admin role : +kubectl delete clusterrolebinding [name] + +### 5.1.2 Minimize access to secrets (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove get, list and watch access to Secret objects in the cluster. + +### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual) + + +**Result:** warn + +**Remediation:** +Where possible replace any use of wildcards in clusterroles and roles with specific +objects or actions. + +### 5.1.4 Minimize access to create pods (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove create access to pod objects in the cluster. + +### 5.1.5 Ensure that default service accounts are not actively used. (Manual) + + +**Result:** pass + +**Remediation:** +Create explicit service accounts wherever a Kubernetes workload requires specific access +to the Kubernetes API server. +Modify the configuration of each default service account to include this value +automountServiceAccountToken: false + +**Audit Script:** `check_for_default_sa.sh` + +```bash +#!/bin/bash + +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + +count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l) +if [[ ${count_sa} -gt 0 ]]; then + echo "false" + exit +fi + +for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name") +do + for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"') + do + read kind name <<<$(IFS=","; echo $result) + 
resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l) + if [[ ${resource_count} -gt 0 ]]; then + echo "false" + exit + fi + done +done + + +echo "true" + +``` + +**Audit Execution:** + +```bash +./check_for_default_sa.sh +``` + +**Expected Result**: + +```console +'true' is equal to 'true' +``` + +**Returned Value**: + +```console +true +``` + +### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) + + +**Result:** warn + +**Remediation:** +Modify the definition of pods and service accounts which do not need to mount service +account tokens to disable it. + +### 5.1.7 Avoid use of system:masters group (Manual) + + +**Result:** warn + +**Remediation:** +Remove the system:masters group from all users in the cluster. + +### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove the impersonate, bind and escalate rights from subjects. + +### 5.1.9 Minimize access to create persistent volumes (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove create access to PersistentVolume objects in the cluster. + +### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the proxy sub-resource of node objects. + +### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the approval sub-resource of certificatesigningrequest objects. 
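+
+As with the audit script above, several of the manual RBAC checks in this section can be spot-checked with `kubectl` and `jq`. The following hedged sketch (it assumes a working kubeconfig and `jq`, as used by the other audit scripts in this guide, and its output still needs manual review) surfaces ClusterRoles that grant the `bind`, `escalate` or `impersonate` verbs from check 5.1.8:
+
+```bash
+# List ClusterRoles granting bind, escalate or impersonate (CIS 5.1.8).
+# Each match should be reviewed and reduced to least privilege where possible.
+kubectl get clusterroles -o json | jq -r '
+  .items[]
+  | select(.rules[]? | .verbs[]? | IN("bind", "escalate", "impersonate"))
+  | .metadata.name' | sort -u
+```
+
+The same pattern extends to the neighboring checks, for example selecting on `resources` containing `secrets` (5.1.2) or `nodes/proxy` (5.1.10).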
+ +### 5.1.12 Minimize access to webhook configuration objects (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects + +### 5.1.13 Minimize access to the service account token creation (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the token sub-resource of serviceaccount objects. + +## 5.2 Pod Security Standards +### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual) + + +**Result:** warn + +**Remediation:** +Ensure that either Pod Security Admission or an external policy control system is in place +for every namespace which contains user workloads. + +### 5.2.2 Minimize the admission of privileged containers (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of privileged containers. + +### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostPID` containers. + +### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostIPC` containers. + +### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostNetwork` containers. 
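+
+If you rely on the built-in Pod Security Admission controller to satisfy these checks (an assumption; an external policy engine such as OPA Gatekeeper or Kyverno works as well), one hedged way to apply a policy per namespace is to label each user-workload namespace with a Pod Security Standard level. The namespace name below is illustrative:
+
+```bash
+# Enforce the "baseline" Pod Security Standard on a user-workload namespace,
+# which blocks privileged, hostPID, hostIPC and hostNetwork containers.
+# Replace "my-app" with each namespace that contains user workloads.
+kubectl label --overwrite namespace my-app \
+  pod-security.kubernetes.io/enforce=baseline \
+  pod-security.kubernetes.io/enforce-version=latest
+```
+
+The `baseline` level covers checks 5.2.2 through 5.2.5; the stricter `restricted` level additionally addresses the allowPrivilegeEscalation, root-user and capability checks that follow.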
+
+### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
+
+### 5.2.7 Minimize the admission of root containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
+or `MustRunAs` with the range of UIDs not including 0, is set.
+
+### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with the `NET_RAW` capability.
+
+### 5.2.9 Minimize the admission of containers with added capabilities (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that `allowedCapabilities` is not present in policies for the cluster unless
+it is set to an empty array.
+
+### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the use of capabilities in applications running on your cluster. Where a namespace
+contains applications which do not require any Linux capabilities to operate, consider adding
+a policy which forbids the admission of containers which do not drop all capabilities.
+
+### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
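+
+For the capability-related checks above (5.2.8 through 5.2.10), a hedged audit sketch (it assumes `kubectl` and `jq`, as used elsewhere in this guide) lists every container that adds Linux capabilities:
+
+```bash
+# List containers that add Linux capabilities via securityContext (CIS 5.2.9).
+# Output format: namespace/pod/container: CAP1,CAP2
+kubectl get pods --all-namespaces -o json | jq -r '
+  .items[] | . as $p
+  | .spec.containers[], (.spec.initContainers[]?)
+  | select(.securityContext.capabilities.add != null)
+  | "\($p.metadata.namespace)/\($p.metadata.name)/\(.name): \(.securityContext.capabilities.add | join(","))"'
+```
+
+Any container that appears here with `NET_RAW` (5.2.8) or other added capabilities should be reviewed against the policies for its namespace.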
+
+### 5.2.12 Minimize the admission of HostPath volumes (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `hostPath` volumes.
+
+### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers which use `hostPort` sections.
+
+## 5.3 Network Policies and CNI
+### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If the CNI plugin in use does not support network policies, consideration should be given to
+making use of a different plugin, or finding an alternate mechanism for restricting traffic
+in the Kubernetes cluster.
+
+### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the documentation and create NetworkPolicy objects as you need them.
+
+## 5.4 Secrets Management
+### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If possible, rewrite application code to read Secrets from mounted secret files, rather than
+from environment variables.
+
+### 5.4.2 Consider external secret storage (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Refer to the Secrets management options offered by your cloud provider or a third-party
+secrets management solution.
+
+## 5.5 Extensible Admission Control
+### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and set up image provenance.
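+
+For check 5.3.2, a common starting point (a hedged example; the namespace name is illustrative, and the policy below only covers ingress) is a default-deny policy in each namespace, on top of which specific allow rules are added:
+
+```bash
+# Apply a default-deny-ingress NetworkPolicy to a namespace (CIS 5.3.2).
+# An empty podSelector matches every pod in the namespace.
+kubectl apply -n my-app -f - <<'EOF'
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-ingress
+spec:
+  podSelector: {}
+  policyTypes:
+    - Ingress
+EOF
+```
+
+Note that this only takes effect if the CNI plugin in use enforces NetworkPolicies (check 5.3.1).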
+
+## 5.7 General Policies
+### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the documentation and create namespaces for objects in your deployment as you need
+them.
+
+### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
+An example is shown below:
+  securityContext:
+    seccompProfile:
+      type: RuntimeDefault
+
+### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
+suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
+Containers.
+
+### 5.7.4 The default namespace should not be used (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
+resources and that all new resources are created in a specific namespace.
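+
+Combining checks 5.7.2 and 5.7.3, a minimal hedged example of a Pod that applies the runtime default seccomp profile together with a restrictive `securityContext` (the Pod name, namespace and image reference are all illustrative):
+
+```bash
+# A Pod with the RuntimeDefault seccomp profile and a restrictive
+# securityContext; adjust names, namespace and image to your deployment.
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: seccomp-demo
+  namespace: my-app
+spec:
+  securityContext:
+    runAsNonRoot: true
+    seccompProfile:
+      type: RuntimeDefault
+  containers:
+    - name: app
+      image: registry.example.com/app:1.0
+      securityContext:
+        allowPrivilegeEscalation: false
+        capabilities:
+          drop: ["ALL"]
+EOF
+```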
+ diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md index 62a27bd3e86..eaffdb72d92 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md @@ -513,4 +513,4 @@ rancher_kubernetes_engine_config: ## 结论 -如果你按照本指南操作,由 Rancher 提供的 RKE 自定义集群将配置为通过 CIS Kubernetes Benchmark 测试。你可以查看我们的 RKE 自我评估指南,了解我们是如何验证每个 benchmarks 的,并且你可以在你的集群上执行相同的操作。 \ No newline at end of file +如果你按照本指南操作,由 Rancher 提供的 RKE 自定义集群将配置为通过 CIS Kubernetes Benchmark 测试。你可以查看我们的 RKE 自我评估指南,了解我们是如何验证每个 benchmarks 的,并且你可以在你的集群上执行相同的操作。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md index 1a3eb88ed98..cb3a548a8b1 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md @@ -1,2863 +1,2864 @@ ---- -title: RKE 自我评估指南 - CIS Benchmark v1.7 - K8s v1.25/v1.26/v1.27 ---- - - - - - - - -本文档是 [RKE 加固指南](rke1-hardening-guide.md)的配套文档,该指南提供了关于如何加固正在生产环境中运行并由 Rancher 管理的 RKE 
集群的指导方针。本 benchmark 指南可帮助你根据 CIS Kubernetes Benchmark 中的每个 control 来评估加固集群的安全性。 - -本指南对应以下版本的 Rancher、CIS Benchmarks 和 Kubernetes: - -| Rancher 版本 | CIS Benchmark 版本 | Kubernetes 版本 | -|-----------------|-----------------------|--------------------| -| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25/v1.26/v1.27 | - -本指南将介绍各种 controls,并提供更新的示例命令来审计 Rancher 创建的集群中的合规性。由于 Rancher 和 RKE 将 Kubernetes 服务安装为 Docker 容器,因此 CIS Kubernetes Benchmark 中的许多 control 验证检查不适用。这些检查将返回 `Not Applicable` 的结果。 - -本文档适用于 Rancher 运维人员、安全团队、审计员和决策者。 - -有关每个 control 的更多信息,包括详细描述和未通过测试的补救措施,请参考 CIS Kubernetes Benchmark v1.7 的相应部分。你可以在[互联网安全中心 (CIS)](https://www.cisecurity.org/benchmark/kubernetes/)创建免费账户后下载 benchmark。 - -## 测试方法 - -Rancher 和 RKE 通过 Docker 容器安装 Kubernetes 服务。配置是通过初始化时传递给容器的参数定义的,而不是通过配置文件。 - -在 control 审计与原始 CIS benchmark 不同时,提供了针对 Rancher 的特定审计命令以进行测试。在执行测试时,你将需要访问所有 RKE 节点主机上的命令行。这些命令还使用了 [kubectl](https://kubernetes.io/docs/tasks/tools/)(带有有效的配置文件)和 [jq](https://stedolan.github.io/jq/) 工具,在测试和评估测试结果时这些工具是必需的。 - -:::note - -本指南仅涵盖 `automated`(之前称为 `scored`)测试。 - -::: - -### Controls - -## 1.1 Control Plane Node Configuration Files -### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the -control plane node. -For example, chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. 
-For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. 
- -### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chmod 600 /etc/kubernetes/manifests/etcd.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root /etc/kubernetes/manifests/etcd.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual) - - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. 
-For example, chmod 600 - -**Audit:** - -```bash -ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a -``` - -**Expected Result**: - -```console -'permissions' is present -``` - -### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual) - - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root - -**Audit:** - -```bash -ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G -``` - -**Expected Result**: - -```console -'root:root' is present -``` - -### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated) - - -**Result:** pass - -**Remediation:** -On the etcd server node, get the etcd data directory, passed as an argument --data-dir, -from the command 'ps -ef | grep etcd'. -Run the below command (based on the etcd data directory found above). For example, -chmod 700 /var/lib/etcd - -**Audit:** - -```bash -stat -c %a /node/var/lib/etcd -``` - -**Expected Result**: - -```console -'700' is equal to '700' -``` - -**Returned Value**: - -```console -700 -``` - -### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) - - -**Result:** pass - -**Remediation:** -On the etcd server node, get the etcd data directory, passed as an argument --data-dir, -from the command 'ps -ef | grep etcd'. -Run the below command (based on the etcd data directory found above). 
-For example, chown etcd:etcd /var/lib/etcd - -**Audit:** - -```bash -stat -c %U:%G /node/var/lib/etcd -``` - -**Expected Result**: - -```console -'etcd:etcd' is present -``` - -**Returned Value**: - -```console -etcd:etcd -``` - -### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chmod 600 /etc/kubernetes/admin.conf -Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. - -### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chown root:root /etc/kubernetes/admin.conf -Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. - -### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chmod 600 scheduler -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. - -### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root scheduler -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. 
- -### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chmod 600 controllermanager -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. - -### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root controllermanager -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. - -### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. 
-For example, -chown -R root:root /etc/kubernetes/pki/ - -**Audit Script:** `check_files_owner_in_dir.sh` - -```bash -#!/usr/bin/env bash - -# This script is used to ensure the owner is set to root:root for -# the given directory and all the files in it -# -# inputs: -# $1 = /full/path/to/directory -# -# outputs: -# true/false - -INPUT_DIR=$1 - -if [[ "${INPUT_DIR}" == "" ]]; then - echo "false" - exit -fi - -if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then - echo "false" - exit -fi - -statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*) -while read -r statInfoLine; do - f=$(echo ${statInfoLine} | cut -d' ' -f1) - p=$(echo ${statInfoLine} | cut -d' ' -f2) - - if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then - if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then - echo "false" - exit - fi - else - if [[ "$p" != "root:root" ]]; then - echo "false" - exit - fi - fi -done <<< "${statInfoLines}" - - -echo "true" -exit - -``` - -**Audit Execution:** - -```bash -./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Returned Value**: - -```console -true -``` - -### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual) - - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' -exec chmod -R 600 {} + - -**Audit:** - -```bash -find /node/etc/kubernetes/ssl/ -name '*.pem' ! 
-name '*key.pem' | xargs stat -c permissions=%a -``` - -**Expected Result**: - -```console -permissions has permissions 644, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=644 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 -``` - -### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -find /node/etc/kubernetes/ssl/ -name '*key.pem' -exec chmod -R 600 {} + - -**Audit:** - -```bash -find /node/etc/kubernetes/ssl/ -name '*key.pem' | xargs stat -c permissions=%a -``` - -**Expected Result**: - -```console -permissions has permissions 600, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 -``` - -## 1.2 API Server -### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the below parameter. ---anonymous-auth=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--anonymous-auth' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the documentation and configure alternate mechanisms for authentication. Then, -edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and remove the --token-auth-file= parameter. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--token-auth-file' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and remove `DenyServiceExternalIPs`
-from the list of enabled admission plugins.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ?
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the -apiserver and kubelets. Then, edit API server pod specification file -/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the -kubelet client certificate and key parameters as below. ---kubelet-client-certificate= ---kubelet-client-key= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between
-the apiserver and kubelets. Then, edit the API server pod specification file
-/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
---kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
---kubelet-certificate-authority=
-When generating serving certificates, functionality could break in conjunction with hostname overrides, which are required for certain cloud providers.
-
-### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
-For example:
---authorization-mode=RBAC - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' does not have 'AlwaysAllow' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to a value that includes Node. ---authorization-mode=Node,RBAC - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' has 'Node' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to a value that includes RBAC, -for example `--authorization-mode=Node,RBAC`. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' has 'RBAC' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set the desired limits in a configuration file. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -and set the below parameters. ---enable-admission-plugins=...,EventRateLimit,... ---admission-control-config-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'EventRateLimit' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a -value that does not include AlwaysAdmit. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual) - - -**Result:** warn - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to include -AlwaysPullImages. ---enable-admission-plugins=...,AlwaysPullImages,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'AlwaysPullImages' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual) - - -**Result:** warn - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to include -SecurityContextDeny, unless PodSecurityPolicy is already in place. ---enable-admission-plugins=...,SecurityContextDeny,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the documentation and create ServiceAccount objects as per your environment. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and ensure that the --disable-admission-plugins parameter is set to a -value that does not include ServiceAccount. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --disable-admission-plugins parameter to -ensure it does not include NamespaceLifecycle. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to a -value that includes NodeRestriction. ---enable-admission-plugins=...,NodeRestriction,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'NodeRestriction' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.16 Ensure that the --secure-port argument is not set to 0 - NoteThis recommendation is obsolete and will be deleted per the consensus process (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and either remove the --secure-port parameter or -set it to a different (non-zero) desired port. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--secure-port' is greater than 0 OR '--secure-port' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.17 Ensure that the --profiling argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the below parameter. ---profiling=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--profiling' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.18 Ensure that the --audit-log-path argument is set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --audit-log-path parameter to a suitable path and -file where you would like audit logs to be written, for example, ---audit-log-path=/var/log/apiserver/audit.log - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-log-path' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --audit-log-maxage parameter to 30 -or as an appropriate number of days, for example, ---audit-log-maxage=30 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-log-maxage' is greater or equal to 30 -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate -value. For example, ---audit-log-maxbackup=10 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-log-maxbackup' is greater or equal to 10 -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB. -For example, to set it as 100 MB, --audit-log-maxsize=100 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-log-maxsize' is greater or equal to 100 -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-and set the below parameter as appropriate, if needed.
-For example, --request-timeout=300s
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---service-account-lookup=true
-Alternatively, you can delete the --service-account-lookup parameter from this file so
-that the default takes effect.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --service-account-key-file parameter -to the public key file for service accounts. For example, ---service-account-key-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--service-account-key-file' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the etcd certificate and key file parameters. ---etcd-certfile= ---etcd-keyfile= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--etcd-certfile' is present AND '--etcd-keyfile' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection on the apiserver. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the TLS certificate and private key file parameters. ---tls-cert-file= ---tls-private-key-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--tls-cert-file' is present AND '--tls-private-key-file' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection on the apiserver. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the client certificate authority file. ---client-ca-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--client-ca-file' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the etcd certificate authority file parameter. ---etcd-cafile= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--etcd-cafile' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --encryption-provider-config parameter to the path of that file.
-For example, --encryption-provider-config=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--encryption-provider-config' is present
-```
-
-**Returned Value**:
-
-```console
-root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.30 Ensure that encryption providers are appropriately configured (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-In this file, choose aescbc, kms or secretbox as the encryption provider.
-
-**Audit:**
-
-```bash
-ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%'); if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
-```
-
-**Expected Result**:
-
-```console
-'provider' is present
-```
-
-### 1.2.31 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter. 
---tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256, -TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, -TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, -TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, -TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, -TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, -TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA, -TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -## 1.3 Controller Manager -### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold, -for example, --terminated-pod-gc-threshold=10 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--terminated-pod-gc-threshold' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.2 Ensure that the --profiling argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the below parameter. ---profiling=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--profiling' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node to set the below parameter. ---use-service-account-credentials=true - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--use-service-account-credentials' is not equal to 'false' -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --service-account-private-key-file parameter -to the private key file for service accounts. ---service-account-private-key-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--service-account-private-key-file' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --root-ca-file parameter to the certificate bundle file. ---root-ca-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--root-ca-file' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. ---feature-gates=RotateKubeletServerCertificate=true -Cluster provisioned by RKE handles certificate rotation directly through RKE. 
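Every controller-manager audit in this section follows the same pattern: read the kube-controller-manager command line from `ps` and assert that a flag is present or carries an expected value. As a rough sketch of that pattern (a hypothetical helper, not part of kube-bench; the sample command line below is abridged from the returned values above):

```bash
#!/usr/bin/env bash
# Hypothetical helper illustrating the section 1.3 audits: grep the
# kube-controller-manager command line for expected flags.

# Sample command line, abridged from the audit output above. On a live node
# this would instead be:
#   cmdline="$(/bin/ps -ef | grep kube-controller-manager | grep -v grep)"
cmdline='kube-controller-manager --terminated-pod-gc-threshold=1000 --profiling=false --use-service-account-credentials=true --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem'

check_flag() {
  # check_flag NAME [VALUE]: PASS if --NAME= is on the command line
  # (and equals VALUE, when a VALUE is given).
  local name="$1" value="${2-}" pattern
  if [ -n "$value" ]; then
    pattern="--${name}=${value}"
  else
    pattern="--${name}="
  fi
  case " ${cmdline} " in
    *" ${pattern}"*) echo "PASS --${name}" ;;
    *)               echo "FAIL --${name}" ;;
  esac
}

check_flag terminated-pod-gc-threshold          # 1.3.1: present
check_flag profiling false                      # 1.3.2: equal to 'false'
check_flag use-service-account-credentials true # 1.3.3: equal to 'true'
check_flag service-account-private-key-file     # 1.3.4: present
check_flag root-ca-file                         # 1.3.5: present
```

The same shape applies to the scheduler, API server, and etcd audits below; only the grepped process name and the expected flags change.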
- -### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and ensure the correct value for the --bind-address parameter - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--bind-address' is present OR '--bind-address' is not present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -## 1.4 Scheduler -### 1.4.1 Ensure that the --profiling argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file -on the control 
plane node and set the below parameter. ---profiling=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected Result**: - -```console -'--profiling' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4339 4318 0 Sep11 ? 00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true -``` - -### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml -on the control plane node and ensure the correct value for the --bind-address parameter - -**Audit:** - -```bash -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected Result**: - -```console -'--bind-address' is present OR '--bind-address' is not present -``` - -**Returned Value**: - -```console -root 4339 4318 0 Sep11 ? 
00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true -``` - -## 2 Etcd Node Configuration -### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the etcd service documentation and configure TLS encryption. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml -on the master node and set the below parameters. ---cert-file= ---key-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--cert-file' is present AND '--key-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and set the below parameter. ---client-cert-auth="true" - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--client-cert-auth' is equal to 'true' -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.3 Ensure that the --auto-tls argument is not set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and either remove the --auto-tls parameter or set it to false. 
---auto-tls=false - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present -``` - -**Returned Value**: - -```console -PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ -``` - -### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the etcd service documentation and configure peer TLS encryption as appropriate -for your etcd cluster. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the -master node and set the below parameters. ---peer-cert-file= ---peer-key-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--peer-cert-file' is present AND '--peer-key-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and set the below parameter. ---peer-client-cert-auth=true - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--peer-client-cert-auth' is equal to 'true' -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and either remove the --peer-auto-tls parameter or set it to false. 
---peer-auto-tls=false - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present -``` - -**Returned Value**: - -```console -PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ -``` - -### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated) - - -**Result:** pass - -**Remediation:** -[Manual test] -Follow the etcd documentation and create a dedicated certificate authority setup for the -etcd service. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the -master node and set the below parameter. ---trusted-ca-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--trusted-ca-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -## 3.1 Authentication and Authorization -### 3.1.1 Client certificate authentication should not be used for users (Manual) - - -**Result:** warn - -**Remediation:** -Alternative mechanisms provided by Kubernetes such as the use of OIDC should be -implemented in place of client certificates. - -### 3.1.2 Service account token authentication should not be used for users (Manual) - - -**Result:** warn - -**Remediation:** -Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented -in place of service account tokens. - -### 3.1.3 Bootstrap token authentication should not be used for users (Manual) - - -**Result:** warn - -**Remediation:** -Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented -in place of bootstrap tokens. - -## 3.2 Logging -### 3.2.1 Ensure that a minimal audit policy is created (Automated) - - -**Result:** pass - -**Remediation:** -Create an audit policy file for your cluster. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-policy-file' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 3.2.2 Ensure that the audit policy covers key security concerns (Manual) - - -**Result:** warn - -**Remediation:** -Review the audit policy provided for the cluster and ensure that it covers -at least the following areas, -- Access to Secrets managed by the cluster. Care should be taken to only - log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in - order to avoid risk of logging sensitive data. -- Modification of Pod and Deployment objects. -- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`. -For most requests, minimally logging at the Metadata level is recommended -(the most basic level of logging). - -## 4.1 Worker Node Configuration Files -### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service. -All configuration is passed in as arguments at container run time.
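Checks 4.1.1 and 4.1.2 are Not Applicable because RKE runs the kubelet as a container and supplies all of its configuration as command-line arguments. A minimal sketch for reading those effective arguments off a node (this assumes a Linux host with procfs, and is not part of the benchmark itself):

```shell
# RKE passes all kubelet configuration as container arguments, so there is
# no unit file or config.yaml to inspect; read the live process instead.
if pid=$(pgrep -o kubelet 2>/dev/null); then
  # /proc/<pid>/cmdline is NUL-separated; translate NULs to spaces.
  tr '\0' ' ' < "/proc/$pid/cmdline"
  echo
else
  echo "kubelet is not running on this host"
fi
```

The same flags also appear in the `/bin/ps -fC kubelet` audit output used throughout section 4.2.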
 - -### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, -chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service. -All configuration is passed in as arguments at container run time. - -### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, -chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' -``` - -**Expected Result**: - -```console -permissions has permissions 600, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 -``` - -### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. 
-For example, chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' -``` - -**Expected Result**: - -```console -'root:root' is present -``` - -**Returned Value**: - -```console -root:root -``` - -### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, -chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' -``` - -**Expected Result**: - -```console -permissions has permissions 600, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 -``` - -### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. 
-For example, -chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' -``` - -**Expected Result**: - -```console -'root:root' is present -``` - -**Returned Value**: - -```console -root:root -``` - -### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated) - - -**Result:** fail - -**Remediation:** -Run the following command to modify the file permissions of the ---client-ca-file chmod 600 - -**Audit:** - -```bash -stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem -``` - -**Expected Result**: - -```console -permissions has permissions 644, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=644 -``` - -### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the following command to modify the ownership of the --client-ca-file. -chown root:root - -**Audit:** - -```bash -stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem -``` - -**Expected Result**: - -```console -'root:root' is equal to 'root:root' -``` - -**Returned Value**: - -```console -root:root -``` - -### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the following command (using the config file location identified in the Audit step) -chmod 600 /var/lib/kubelet/config.yaml -Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet. -All configuration is passed in as arguments at container run time. 
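The per-file audits in 4.1.3 through 4.1.8 can be batched into one helper. This is a sketch, not part of the benchmark: the `/node/etc/kubernetes/ssl` paths are the RKE defaults quoted in the audits above, and ownership is printed for review rather than scored. Note that "600 or more restrictive" is a bitmask condition, not a numeric one (mode 444 is numerically below 600 yet world-readable):

```shell
# Batch check of the kubeconfig/CA files audited in section 4.1.
check_perms() {
  f="$1"
  [ -e "$f" ] || { echo "$f: not present, skipping"; return 0; }
  p=$(stat -c %a "$f")
  # "600 or more restrictive" = no permission bits outside owner rw.
  if [ $(( 0$p & ~0600 & 0777 )) -eq 0 ]; then
    echo "$f: ok (permissions=$p)"
  else
    echo "$f: FAIL (permissions=$p, expected 600 or more restrictive)"
  fi
}

for f in /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml \
         /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml \
         /node/etc/kubernetes/ssl/kube-ca.pem; do
  check_perms "$f"
  # Ownership (4.1.4/4.1.6/4.1.8 expect root:root) shown for manual review.
  if [ -e "$f" ]; then stat -c "$f: owner=%U:%G" "$f"; fi
done
```

Run inside the same context as the audits (the paths assume the node filesystem is mounted at `/node`, as in the audit commands above).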
 - -### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Run the following command (using the config file location identified in the Audit step) -chown root:root /var/lib/kubelet/config.yaml -Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet. -All configuration is passed in as arguments at container run time. - -## 4.2 Kubelet -### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to -`false`. -If using executable arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. -`--anonymous-auth=false` -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--anonymous-auth' is equal to 'false' -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If -using executable arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_AUTHZ_ARGS variable. ---authorization-mode=Webhook -Based on your system, restart the kubelet service. 
For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--authorization-mode' does not have 'AlwaysAllow' -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file 
to set `authentication.x509.clientCAFile` to -the location of the client CA file. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_AUTHZ_ARGS variable. ---client-ca-file= -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--client-ca-file' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local 
--tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `readOnlyPort` to 0. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. ---read-only-port=0 -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--read-only-port' is equal to '0' OR '--read-only-port' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a -value other than 0. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. ---streaming-connection-idle-timeout=5m -Based on your system, restart the kubelet service. 
For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated) - - -**Result:** pass 
- -**Remediation:** -If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -remove the --make-iptables-util-chains argument from the -KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 
--kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.7 Ensure that the --hostname-override argument is not set (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -on each worker node and remove the --hostname-override argument from the -KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service -Not Applicable - Clusters provisioned by RKE set the --hostname-override argument to avoid hostname configuration errors. - -### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--event-qps' is greater or equal to 0 OR '--event-qps' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `tlsCertFile` to the location -of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile` -to the location of the corresponding private key file. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameters in KUBELET_CERTIFICATE_ARGS variable. 
---tls-cert-file= ---tls-private-key-file= -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--tls-cert-file' is present AND '--tls-private-key-file' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.10 Ensure that the --rotate-certificates argument is 
not set to false (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or -remove it altogether to use the default value. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS -variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--rotate-certificates' is present OR '--rotate-certificates' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true 
--cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
---feature-gates=RotateKubeletServerCertificate=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-Not Applicable - Clusters provisioned by RKE handle certificate rotation directly through RKE.
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
-TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-or to a subset of these values.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the --tls-cipher-suites parameter as follows, or to a subset of these values.
---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.13 Ensure that a limit is set on pod PIDs (Manual) - - -**Result:** warn - -**Remediation:** -Decide on an appropriate level for this parameter and set it, -either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting. - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--pod-max-pids' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -## 5.1 RBAC and Service Accounts -### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual) - - -**Result:** warn - -**Remediation:** -Identify all clusterrolebindings to the cluster-admin role. Check if they are used and -if they need this role or if they could use a role with fewer privileges. 
-Where possible, first bind users to a lower privileged role and then remove the -clusterrolebinding to the cluster-admin role : -kubectl delete clusterrolebinding [name] - -### 5.1.2 Minimize access to secrets (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove get, list and watch access to Secret objects in the cluster. - -### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual) - - -**Result:** warn - -**Remediation:** -Where possible replace any use of wildcards in clusterroles and roles with specific -objects or actions. - -### 5.1.4 Minimize access to create pods (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove create access to pod objects in the cluster. - -### 5.1.5 Ensure that default service accounts are not actively used. (Manual) - - -**Result:** pass - -**Remediation:** -Create explicit service accounts wherever a Kubernetes workload requires specific access -to the Kubernetes API server. -Modify the configuration of each default service account to include this value -automountServiceAccountToken: false - -**Audit Script:** `check_for_default_sa.sh` - -```bash -#!/bin/bash - -set -eE - -handle_error() { - echo "false" -} - -trap 'handle_error' ERR - -count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l) -if [[ ${count_sa} -gt 0 ]]; then - echo "false" - exit -fi - -for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name") -do - for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"') - do - read kind name <<<$(IFS=","; echo $result) - 
resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l) - if [[ ${resource_count} -gt 0 ]]; then - echo "false" - exit - fi - done -done - - -echo "true" - -``` - -**Audit Execution:** - -```bash -./check_for_default_sa.sh -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Returned Value**: - -```console -true -``` - -### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) - - -**Result:** warn - -**Remediation:** -Modify the definition of pods and service accounts which do not need to mount service -account tokens to disable it. - -### 5.1.7 Avoid use of system:masters group (Manual) - - -**Result:** warn - -**Remediation:** -Remove the system:masters group from all users in the cluster. - -### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove the impersonate, bind and escalate rights from subjects. - -### 5.1.9 Minimize access to create persistent volumes (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove create access to PersistentVolume objects in the cluster. - -### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the proxy sub-resource of node objects. - -### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the approval sub-resource of certificatesigningrequest objects. 
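The least-privilege recommendations in the 5.1.x controls come down to replacing wildcards with explicit verbs, resources, and resource names. As an illustrative sketch (the namespace, Role name, and Secret name below are placeholders, not values taken from this report), a Role granting read access to one named Secret rather than all Secrets might look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-app-config        # placeholder name
  namespace: example-app       # placeholder namespace
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["app-config"]  # scoped to one Secret, not "*"
  verbs: ["get"]                 # no list/watch, per control 5.1.2
```

Note that `resourceNames` cannot restrict `list` or `watch`, which is one reason control 5.1.2 recommends removing those verbs entirely where possible.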
- -### 5.1.12 Minimize access to webhook configuration objects (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects - -### 5.1.13 Minimize access to the service account token creation (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the token sub-resource of serviceaccount objects. - -## 5.2 Pod Security Standards -### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual) - - -**Result:** warn - -**Remediation:** -Ensure that either Pod Security Admission or an external policy control system is in place -for every namespace which contains user workloads. - -### 5.2.2 Minimize the admission of privileged containers (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of privileged containers. - -### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostPID` containers. - -### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostIPC` containers. - -### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostNetwork` containers. 
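For control 5.2.1, one built-in policy control mechanism is Pod Security Admission, which is enabled per namespace via labels. A hedged sketch (the namespace name is a placeholder; choose the profile and version appropriate to your workloads):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example-workloads                        # placeholder namespace
  labels:
    # Reject pods that violate the restricted profile
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # Also surface warnings on kubectl apply
    pod-security.kubernetes.io/warn: restricted
```

The `restricted` profile covers several of the checks that follow, including `hostPID`, `hostIPC`, and `hostNetwork` (controls 5.2.3 through 5.2.5).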
-
-### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
-
-### 5.2.7 Minimize the admission of root containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
-or `MustRunAs` with the range of UIDs not including 0, is set.
-
-### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with the `NET_RAW` capability.
-
-### 5.2.9 Minimize the admission of containers with added capabilities (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that `allowedCapabilities` is not present in policies for the cluster unless
-it is set to an empty array.
-
-### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the use of capabilities in applications running on your cluster. Where a namespace
-contains applications which do not require any Linux capabilities to operate, consider adding
-a PSP which forbids the admission of containers which do not drop all capabilities.
-
-### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
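Several of the 5.2.x controls (privilege escalation, root containers, capabilities) map directly onto container `securityContext` fields. A minimal sketch of a container spec that satisfies those checks (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                        # placeholder
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0    # placeholder image
    securityContext:
      allowPrivilegeEscalation: false      # control 5.2.6
      runAsNonRoot: true                   # control 5.2.7
      capabilities:
        drop: ["ALL"]                      # controls 5.2.8-5.2.10
      seccompProfile:
        type: RuntimeDefault
```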
- -### 5.2.12 Minimize the admission of HostPath volumes (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers with `hostPath` volumes. - -### 5.2.13 Minimize the admission of containers which use HostPorts (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers which use `hostPort` sections. - -## 5.3 Network Policies and CNI -### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual) - - -**Result:** warn - -**Remediation:** -If the CNI plugin in use does not support network policies, consideration should be given to -making use of a different plugin, or finding an alternate mechanism for restricting traffic -in the Kubernetes cluster. - -### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual) - - -**Result:** warn - -**Remediation:** -Follow the documentation and create NetworkPolicy objects as you need them. - -## 5.4 Secrets Management -### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual) - - -**Result:** warn - -**Remediation:** -If possible, rewrite application code to read Secrets from mounted secret files, rather than -from environment variables. - -### 5.4.2 Consider external secret storage (Manual) - - -**Result:** warn - -**Remediation:** -Refer to the Secrets management options offered by your cloud provider or a third-party -secrets management solution. - -## 5.5 Extensible Admission Control -### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual) - - -**Result:** warn - -**Remediation:** -Follow the Kubernetes documentation and setup image provenance. 
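For control 5.3.2, a common starting point is a default-deny NetworkPolicy applied to each namespace, after which traffic is allowed back selectively. A sketch (the namespace is a placeholder; this assumes the CNI in use supports NetworkPolicies, per control 5.3.1):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: example-app   # placeholder; apply one per namespace
spec:
  podSelector: {}          # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
```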
- -## 5.7 General Policies -### 5.7.1 Create administrative boundaries between resources using namespaces (Manual) - - -**Result:** warn - -**Remediation:** -Follow the documentation and create namespaces for objects in your deployment as you need -them. - -### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual) - - -**Result:** warn - -**Remediation:** -Use `securityContext` to enable the docker/default seccomp profile in your pod definitions. -An example is as below: - securityContext: - seccompProfile: - type: RuntimeDefault - -### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual) - - -**Result:** warn - -**Remediation:** -Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a -suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker -Containers. - -### 5.7.4 The default namespace should not be used (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Ensure that namespaces are created to allow for appropriate segregation of Kubernetes -resources and that all new resources are created in a specific namespace. 
+--- +title: RKE 自我评估指南 - CIS Benchmark v1.7 - K8s v1.25/v1.26/v1.27 +--- + + + + + + + +本文档是 [RKE 加固指南](rke1-hardening-guide.md)的配套文档,该指南提供了关于如何加固正在生产环境中运行并由 Rancher 管理的 RKE 集群的指导方针。本 benchmark 指南可帮助你根据 CIS Kubernetes Benchmark 中的每个 control 来评估加固集群的安全性。 + +本指南对应以下版本的 Rancher、CIS Benchmarks 和 Kubernetes: + +| Rancher 版本 | CIS Benchmark 版本 | Kubernetes 版本 | +|-----------------|-----------------------|--------------------| +| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25/v1.26/v1.27 | + +本指南将介绍各种 controls,并提供更新的示例命令来审计 Rancher 创建的集群中的合规性。由于 Rancher 和 RKE 将 Kubernetes 服务安装为 Docker 容器,因此 CIS Kubernetes Benchmark 中的许多 control 验证检查不适用。这些检查将返回 `Not Applicable` 的结果。 + +本文档适用于 Rancher 运维人员、安全团队、审计员和决策者。 + +有关每个 control 的更多信息,包括详细描述和未通过测试的补救措施,请参考 CIS Kubernetes Benchmark v1.7 的相应部分。你可以在[互联网安全中心 (CIS)](https://www.cisecurity.org/benchmark/kubernetes/)创建免费账户后下载 benchmark。 + +## 测试方法 + +Rancher 和 RKE 通过 Docker 容器安装 Kubernetes 服务。配置是通过初始化时传递给容器的参数定义的,而不是通过配置文件。 + +在 control 审计与原始 CIS benchmark 不同时,提供了针对 Rancher 的特定审计命令以进行测试。在执行测试时,你将需要访问所有 RKE 节点主机上的命令行。这些命令还使用了 [kubectl](https://kubernetes.io/docs/tasks/tools/)(带有有效的配置文件)和 [jq](https://stedolan.github.io/jq/) 工具,在测试和评估测试结果时这些工具是必需的。 + +:::note + +本指南仅涵盖 `automated`(之前称为 `scored`)测试。 + +::: + +### Controls + +## 1.1 Control Plane Node Configuration Files +### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the +control plane node. +For example, chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. +All configuration is passed in as arguments at container run time. 
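The file permission controls in section 1.1 all follow the same remediate-then-audit pattern: `chmod` to the required mode, then read the mode back with `stat`. A self-contained sketch of that pattern against a throwaway file (the real manifest paths don't exist on RKE nodes, which is why these controls return `Not Applicable`):

```shell
# Illustrative only: demonstrates the chmod/stat audit pattern used by
# the 1.1.x controls, using a temporary file instead of a real manifest.
manifest=$(mktemp)               # stand-in for e.g. kube-apiserver.yaml
chmod 600 "$manifest"            # remediation: owner read/write only
perms=$(stat -c %a "$manifest")  # audit: read back the octal mode
echo "permissions=$perms"        # prints permissions=600
rm -f "$manifest"
```

On a cluster where these files do exist, the same two commands are simply pointed at the real path from the control text.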
+ +### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. +All configuration is passed in as arguments at container run time. + +### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. +All configuration is passed in as arguments at container run time. + +### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. +All configuration is passed in as arguments at container run time. + +### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. 
+For example, chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. +All configuration is passed in as arguments at container run time. + +### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. +All configuration is passed in as arguments at container run time. + +### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chmod 600 /etc/kubernetes/manifests/etcd.yaml +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. +All configuration is passed in as arguments at container run time. + +### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root /etc/kubernetes/manifests/etcd.yaml +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. +All configuration is passed in as arguments at container run time. 
+
+### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 
+
+**Audit:**
+
+```bash
+ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
+```
+
+**Expected Result**:
+
+```console
+'permissions' is present
+```
+
+### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root 
+
+**Audit:**
+
+```bash
+ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above).
For example, +chmod 700 /var/lib/etcd + +**Audit:** + +```bash +stat -c %a /node/var/lib/etcd +``` + +**Expected Result**: + +```console +'700' is equal to '700' +``` + +**Returned Value**: + +```console +700 +``` + +### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) + + +**Result:** pass + +**Remediation:** +On the etcd server node, get the etcd data directory, passed as an argument --data-dir, +from the command 'ps -ef | grep etcd'. +Run the below command (based on the etcd data directory found above). +For example, chown etcd:etcd /var/lib/etcd + +**Audit:** + +```bash +stat -c %U:%G /node/var/lib/etcd +``` + +**Expected Result**: + +```console +'etcd:etcd' is present +``` + +**Returned Value**: + +```console +etcd:etcd +``` + +### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chmod 600 /etc/kubernetes/admin.conf +Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. + +### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chown root:root /etc/kubernetes/admin.conf +Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. + +### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. 
+For example, +chmod 600 scheduler +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. +All configuration is passed in as arguments at container run time. + +### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root scheduler +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. +All configuration is passed in as arguments at container run time. + +### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chmod 600 controllermanager +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. +All configuration is passed in as arguments at container run time. + +### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root controllermanager +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. +All configuration is passed in as arguments at container run time. + +### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. 
+For example, +chown -R root:root /etc/kubernetes/pki/ + +**Audit Script:** `check_files_owner_in_dir.sh` + +```bash +#!/usr/bin/env bash + +# This script is used to ensure the owner is set to root:root for +# the given directory and all the files in it +# +# inputs: +# $1 = /full/path/to/directory +# +# outputs: +# true/false + +INPUT_DIR=$1 + +if [[ "${INPUT_DIR}" == "" ]]; then + echo "false" + exit +fi + +if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then + echo "false" + exit +fi + +statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*) +while read -r statInfoLine; do + f=$(echo ${statInfoLine} | cut -d' ' -f1) + p=$(echo ${statInfoLine} | cut -d' ' -f2) + + if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then + if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then + echo "false" + exit + fi + else + if [[ "$p" != "root:root" ]]; then + echo "false" + exit + fi + fi +done <<< "${statInfoLines}" + + +echo "true" +exit + +``` + +**Audit Execution:** + +```bash +./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl +``` + +**Expected Result**: + +```console +'true' is equal to 'true' +``` + +**Returned Value**: + +```console +true +``` + +### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual) + + +**Result:** warn + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' -exec chmod -R 600 {} + + +**Audit:** + +```bash +find /node/etc/kubernetes/ssl/ -name '*.pem' ! 
-name '*key.pem' | xargs stat -c permissions=%a +``` + +**Expected Result**: + +```console +permissions has permissions 644, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=644 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 +``` + +### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +find /node/etc/kubernetes/ssl/ -name '*key.pem' -exec chmod -R 600 {} + + +**Audit:** + +```bash +find /node/etc/kubernetes/ssl/ -name '*key.pem' | xargs stat -c permissions=%a +``` + +**Expected Result**: + +```console +permissions has permissions 600, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 +``` + +## 1.2 API Server +### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the below parameter. +--anonymous-auth=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--anonymous-auth' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the documentation and configure alternate mechanisms for authentication. Then, +edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and remove the --token-auth-file= parameter. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--token-auth-file' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and remove the `DenyServiceExternalIPs` +from enabled admission plugins. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the +apiserver and kubelets. Then, edit API server pod specification file +/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the +kubelet client certificate and key parameters as below. +--kubelet-client-certificate= +--kubelet-client-key= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between +the apiserver and kubelets. Then, edit the API server pod specification file +/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the +--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority. +--kubelet-certificate-authority= +When generating serving certificates, functionality could break in conjunction with hostname overrides, which are required for certain cloud providers. + +### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow. +One such example could be as below.
+--authorization-mode=RBAC + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' does not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to a value that includes Node. +--authorization-mode=Node,RBAC + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' has 'Node' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to a value that includes RBAC, +for example `--authorization-mode=Node,RBAC`. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' has 'RBAC' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set the desired limits in a configuration file. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +and set the below parameters. +--enable-admission-plugins=...,EventRateLimit,... +--admission-control-config-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'EventRateLimit' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a +value that does not include AlwaysAdmit. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual) + + +**Result:** warn + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --enable-admission-plugins parameter to include +AlwaysPullImages. +--enable-admission-plugins=...,AlwaysPullImages,... + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'AlwaysPullImages' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual) + + +**Result:** warn + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --enable-admission-plugins parameter to include +SecurityContextDeny, unless PodSecurityPolicy is already in place. +--enable-admission-plugins=...,SecurityContextDeny,... + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated)


**Result:** pass

**Remediation:**
Follow the documentation and create ServiceAccount objects as per your environment.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
value that does not include ServiceAccount.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --disable-admission-plugins parameter to
ensure it does not include NamespaceLifecycle.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated)


**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --enable-admission-plugins parameter to a
value that includes NodeRestriction.
--enable-admission-plugins=...,NodeRestriction,...

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--enable-admission-plugins' has 'NodeRestriction'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.16 Ensure that the --secure-port argument is not set to 0 - Note: This recommendation is obsolete and will be deleted per the consensus process (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and either remove the --secure-port parameter or
set it to a different (non-zero) desired port.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--secure-port' is greater than 0 OR '--secure-port' is not present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.17 Ensure that the --profiling argument is set to false (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--profiling=false

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--profiling' is equal to 'false'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.18 Ensure that the --audit-log-path argument is set (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example,
--audit-log-path=/var/log/apiserver/audit.log

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-path' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-maxage parameter to 30
or as an appropriate number of days, for example,
--audit-log-maxage=30

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-maxage' is greater or equal to 30
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
value. For example,
--audit-log-maxbackup=10

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-maxbackup' is greater or equal to 10
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
For example, to set it as 100 MB, --audit-log-maxsize=100

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-maxsize' is greater or equal to 100
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+and set the below parameter as appropriate and if needed.
+For example, --request-timeout=300s
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--service-account-lookup=true
+Alternatively, you can delete the --service-account-lookup parameter from this file so
+that the default takes effect.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --service-account-key-file parameter
+to the public key file for service accounts. For example,
+--service-account-key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--service-account-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the etcd certificate and key file parameters.
+--etcd-certfile=
+--etcd-keyfile=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--etcd-certfile' is present AND '--etcd-keyfile' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the TLS certificate and private key file parameters.
+--tls-cert-file=
+--tls-private-key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--tls-cert-file' is present AND '--tls-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the client certificate authority file.
+--client-ca-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--client-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the etcd certificate authority file parameter.
+--etcd-cafile=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--etcd-cafile' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --encryption-provider-config parameter to the path of that file.
+For example, --encryption-provider-config=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--encryption-provider-config' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.30 Ensure that encryption providers are appropriately configured (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+In this file, choose aescbc, kms or secretbox as the encryption provider.
+
+**Audit:**
+
+```bash
+ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%')
+if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
+```
+
+**Expected Result**:
+
+```console
+'provider' is present
+```
+
+### 1.2.31 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
+TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
+TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
+TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
+TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
+TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
+TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
+TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+## 1.3 Controller Manager
+### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
+for example, --terminated-pod-gc-threshold=10
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--terminated-pod-gc-threshold' is present
+```
+
+**Returned Value**:
+
+```console
+root 4184 4163 1 Sep11 ?
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true
+```
+
+### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 4184 4163 1 Sep11 ?
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node to set the below parameter. +--use-service-account-credentials=true + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--use-service-account-credentials' is not equal to 'false' +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --service-account-private-key-file parameter +to the private key file for service accounts. +--service-account-private-key-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--service-account-private-key-file' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --root-ca-file parameter to the certificate bundle file. +--root-ca-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--root-ca-file' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. +--feature-gates=RotateKubeletServerCertificate=true +Clusters provisioned by RKE handle certificate rotation directly through RKE. 
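Although check 1.3.6 is Not Applicable on RKE, the feature gate can still be confirmed on the running controller manager. A minimal sketch: the `cmdline` variable below is a shortened, illustrative stand-in for the "Returned Value" output above, so the snippet runs without a live cluster; on a real node the input would come from `/bin/ps -ef | grep kube-controller-manager | grep -v grep`.

```shell
# Shortened, hypothetical stand-in for the live controller manager command line
# (copied in spirit from the "Returned Value" blocks above, not real output).
cmdline='kube-controller-manager --profiling=false --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true'

# Split the command line into one flag per line, then pick out --feature-gates.
echo "$cmdline" | tr ' ' '\n' | grep '^--feature-gates='
```

The same filter isolates any flag audited in this section, for example `grep '^--profiling='` for check 1.3.2.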
+ +### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and ensure the correct value for the --bind-address parameter. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--bind-address' is present OR '--bind-address' is not present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +## 1.4 Scheduler +### 1.4.1 Ensure that the --profiling argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml +on the control 
plane node and set the below parameter. +--profiling=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-scheduler | grep -v grep +``` + +**Expected Result**: + +```console +'--profiling' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4339 4318 0 Sep11 ? 00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true +``` + +### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml +on the control plane node and ensure the correct value for the --bind-address parameter + +**Audit:** + +```bash +/bin/ps -ef | grep kube-scheduler | grep -v grep +``` + +**Expected Result**: + +```console +'--bind-address' is present OR '--bind-address' is not present +``` + +**Returned Value**: + +```console +root 4339 4318 0 Sep11 ? 
00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true +``` + +## 2 Etcd Node Configuration +### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the etcd service documentation and configure TLS encryption. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml +on the master node and set the below parameters. +--cert-file= +--key-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--cert-file' is present AND '--key-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and set the below parameter. +--client-cert-auth="true" + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.3 Ensure that the --auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and either remove the --auto-tls parameter or set it to false. 
+ + --auto-tls=false + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present +``` + +**Returned Value**: + +```console +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ +``` + +### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the etcd service documentation and configure peer TLS encryption as appropriate +for your etcd cluster. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the +master node and set the below parameters. +--peer-cert-file= +--peer-key-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--peer-cert-file' is present AND '--peer-key-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and set the below parameter. +--peer-client-cert-auth=true + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--peer-client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and either remove the --peer-auto-tls parameter or set it to false. 
+--peer-auto-tls=false + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present +``` + +**Returned Value**: + +```console +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ +``` + +### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated) + + +**Result:** pass + +**Remediation:** +[Manual test] +Follow the etcd documentation and create a dedicated certificate authority setup for the +etcd service. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the +master node and set the below parameter. +--trusted-ca-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--trusted-ca-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +## 3.1 Authentication and Authorization +### 3.1.1 Client certificate authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be +implemented in place of client certificates. + +### 3.1.2 Service account token authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented +in place of service account tokens. + +### 3.1.3 Bootstrap token authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented +in place of bootstrap tokens. + +## 3.2 Logging +### 3.2.1 Ensure that a minimal audit policy is created (Automated) + + +**Result:** pass + +**Remediation:** +Create an audit policy file for your cluster. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--audit-policy-file' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the audit policy provided for the cluster and ensure that it covers
+at least the following areas,
+- Access to Secrets managed by the cluster. Care should be taken to only
+  log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
+  order to avoid risk of logging sensitive data.
+- Modification of Pod and Deployment objects.
+- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
+For most requests, minimally logging at the Metadata level is recommended
+(the most basic level of logging).
+
+## 4.1 Worker Node Configuration Files
+### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service.
+All configuration is passed in as arguments at container run time.
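Check 4.1.1 above is Not Applicable because RKE runs the kubelet as a container rather than a systemd service. This can be spot-checked on a worker node with the same conditional `stat` idiom the benchmark uses elsewhere; the `audit_file` helper name and its "file not present" message below are illustrative, not kube-bench output:

```shell
# Sketch: verify that the systemd drop-in audited by check 4.1.1 does not
# exist on an RKE node. The helper mirrors the benchmark's conditional stat
# audit; its name and fallback message are illustrative only.
audit_file() {
  if test -e "$1"; then
    stat -c "permissions=%a" "$1"
  else
    echo "file not present"
  fi
}

# On an RKE-provisioned worker this should print "file not present",
# since all kubelet configuration is passed as container arguments instead.
audit_file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```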
+
+### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service.
+All configuration is passed in as arguments at container run time.
+
+### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
+
+### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node. 
+For example, chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
+
+### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node. 
+For example, +chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml + +**Audit:** + +```bash +/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' +``` + +**Expected Result**: + +```console +'root:root' is present +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated) + + +**Result:** fail + +**Remediation:** +Run the following command to modify the file permissions of the +--client-ca-file chmod 600 + +**Audit:** + +```bash +stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem +``` + +**Expected Result**: + +```console +permissions has permissions 644, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=644 +``` + +### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the following command to modify the ownership of the --client-ca-file. +chown root:root + +**Audit:** + +```bash +stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem +``` + +**Expected Result**: + +```console +'root:root' is equal to 'root:root' +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the following command (using the config file location identified in the Audit step) +chmod 600 /var/lib/kubelet/config.yaml +Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet. +All configuration is passed in as arguments at container run time. 
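Check 4.1.7 above is the only failing file check in this group: kube-ca.pem is mode 644. The sketch below tightens the file and re-runs the benchmark's own `stat` audit. It assumes the host-side path is `/etc/kubernetes/ssl/kube-ca.pem` (i.e. the audit's `/node/...` path without the kube-bench container mount prefix); the `remediate_and_audit` helper is illustrative, not a kube-bench command. Run it on each node:

```shell
# Sketch: clear check 4.1.7 by tightening the CA file, then re-run the
# same stat audit the benchmark uses. remediate_and_audit is an
# illustrative helper, not part of kube-bench.
remediate_and_audit() {
  chmod 600 "$1"
  stat -c "permissions=%a" "$1"
}

# Assumed host path; the audit output above shows it as
# /node/etc/kubernetes/ssl/kube-ca.pem from inside the kube-bench container.
if test -e /etc/kubernetes/ssl/kube-ca.pem; then
  remediate_and_audit /etc/kubernetes/ssl/kube-ca.pem
fi
```

Re-running the benchmark afterwards should report `permissions=600` for this check.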
+
+### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the following command (using the config file location identified in the Audit step)
+chown root:root /var/lib/kubelet/config.yaml
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet.
+All configuration is passed in as arguments at container run time.
+
+## 4.2 Kubelet
+### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
+`false`.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+`--anonymous-auth=false`
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--anonymous-auth' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If +using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--authorization-mode=Webhook +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--authorization-mode' does not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file 
to set `authentication.x509.clientCAFile` to +the location of the client CA file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--client-ca-file= +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--client-ca-file' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local 
--tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `readOnlyPort` to 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--read-only-port=0 +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--read-only-port' is equal to '0' OR '--read-only-port' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a +value other than 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--streaming-connection-idle-timeout=5m +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated) + + +**Result:** pass 
+ +**Remediation:** +If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +remove the --make-iptables-util-chains argument from the +KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 
--kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.7 Ensure that the --hostname-override argument is not set (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +on each worker node and remove the --hostname-override argument from the +KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service +Not Applicable - Clusters provisioned by RKE set the --hostname-override to avoid any hostname configuration errors + +### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--event-qps' is greater or equal to 0 OR '--event-qps' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `tlsCertFile` to the location +of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile` +to the location of the corresponding private key file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameters in KUBELET_CERTIFICATE_ARGS variable. 
+--tls-cert-file= +--tls-private-key-file= +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--tls-cert-file' is present AND '--tls-private-key-file' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.10 Ensure that the --rotate-certificates argument is 
not set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or
+remove it altogether to use the default value.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
+variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--rotate-certificates' is present OR '--rotate-certificates' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true 
--cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable. +--feature-gates=RotateKubeletServerCertificate=true +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service +Not Applicable - Clusters provisioned by RKE handles certificate rotation directly through RKE. + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `TLSCipherSuites` to +TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +or to a subset of these values. +If using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the --tls-cipher-suites parameter as follows, or to a subset of these values. 
+--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.13 Ensure that a limit is set on pod PIDs (Manual) + + +**Result:** warn + +**Remediation:** +Decide on an appropriate level for this parameter and set it, +either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting. + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--pod-max-pids' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +## 5.1 RBAC and Service Accounts +### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual) + + +**Result:** warn + +**Remediation:** +Identify all clusterrolebindings to the cluster-admin role. Check if they are used and +if they need this role or if they could use a role with fewer privileges. 
+Where possible, first bind users to a lower privileged role and then remove the +clusterrolebinding to the cluster-admin role : +kubectl delete clusterrolebinding [name] + +### 5.1.2 Minimize access to secrets (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove get, list and watch access to Secret objects in the cluster. + +### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual) + + +**Result:** warn + +**Remediation:** +Where possible replace any use of wildcards in clusterroles and roles with specific +objects or actions. + +### 5.1.4 Minimize access to create pods (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove create access to pod objects in the cluster. + +### 5.1.5 Ensure that default service accounts are not actively used. (Manual) + + +**Result:** pass + +**Remediation:** +Create explicit service accounts wherever a Kubernetes workload requires specific access +to the Kubernetes API server. +Modify the configuration of each default service account to include this value +automountServiceAccountToken: false + +**Audit Script:** `check_for_default_sa.sh` + +```bash +#!/bin/bash + +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + +count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l) +if [[ ${count_sa} -gt 0 ]]; then + echo "false" + exit +fi + +for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name") +do + for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"') + do + read kind name <<<$(IFS=","; echo $result) + 
resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l) + if [[ ${resource_count} -gt 0 ]]; then + echo "false" + exit + fi + done +done + + +echo "true" + +``` + +**Audit Execution:** + +```bash +./check_for_default_sa.sh +``` + +**Expected Result**: + +```console +'true' is equal to 'true' +``` + +**Returned Value**: + +```console +true +``` + +### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) + + +**Result:** warn + +**Remediation:** +Modify the definition of pods and service accounts which do not need to mount service +account tokens to disable it. + +### 5.1.7 Avoid use of system:masters group (Manual) + + +**Result:** warn + +**Remediation:** +Remove the system:masters group from all users in the cluster. + +### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove the impersonate, bind and escalate rights from subjects. + +### 5.1.9 Minimize access to create persistent volumes (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove create access to PersistentVolume objects in the cluster. + +### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the proxy sub-resource of node objects. + +### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the approval sub-resource of certificatesigningrequest objects. 
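The clusterrolebinding review in 5.1.1 can be partly automated with `kubectl` and `jq`, both of which are already prerequisites of this guide. A minimal sketch: the jq filter below is what you would run against `kubectl get clusterrolebindings -o json`; the inline `sample` document (with made-up binding names) stands in for that output so the filter can be exercised without a cluster.

```shell
# The same filter you would run against:
#   kubectl get clusterrolebindings -o json
# Here it runs over an inline sample so it can be tested anywhere.
sample='{
  "items": [
    {"metadata": {"name": "cluster-admin"},
     "roleRef": {"kind": "ClusterRole", "name": "cluster-admin"},
     "subjects": [{"kind": "Group", "name": "system:masters"}]},
    {"metadata": {"name": "view-binding"},
     "roleRef": {"kind": "ClusterRole", "name": "view"},
     "subjects": [{"kind": "User", "name": "alice"}]}
  ]
}'

# Print every subject bound to the cluster-admin role (check 5.1.1).
echo "$sample" | jq -r '
  .items[]
  | select(.roleRef.name == "cluster-admin")
  | .metadata.name + " -> " + (.subjects[] | .kind + "/" + .name)'
# → cluster-admin -> Group/system:masters
```

Bindings that show up here and are not strictly required should be rebound to a less-privileged role and then removed with `kubectl delete clusterrolebinding [name]`, as the remediation for 5.1.1 describes.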
+ +### 5.1.12 Minimize access to webhook configuration objects (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects + +### 5.1.13 Minimize access to the service account token creation (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the token sub-resource of serviceaccount objects. + +## 5.2 Pod Security Standards +### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual) + + +**Result:** warn + +**Remediation:** +Ensure that either Pod Security Admission or an external policy control system is in place +for every namespace which contains user workloads. + +### 5.2.2 Minimize the admission of privileged containers (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of privileged containers. + +### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostPID` containers. + +### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostIPC` containers. + +### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostNetwork` containers. 
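One way to satisfy the per-namespace policy controls in 5.2.1–5.2.5 is the built-in Pod Security Admission controller, which is driven by namespace labels. A minimal sketch, with `user-workloads` as a hypothetical namespace name — the `restricted` profile rejects privileged containers and pods that share the host PID, IPC, or network namespaces:

```shell
# Generate a namespace manifest enforcing the "restricted" Pod Security
# Standard. The namespace name is illustrative.
# Apply with: kubectl apply -f user-workloads-ns.yaml
cat > user-workloads-ns.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: user-workloads
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
EOF
grep -c 'pod-security.kubernetes.io' user-workloads-ns.yaml
# → 3
```

The `audit` and `warn` modes surface violations without blocking workloads, which is a useful first step before switching a namespace to `enforce`.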
+
+### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
+
+### 5.2.7 Minimize the admission of root containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
+or `MustRunAs` with the range of UIDs not including 0, is set.
+
+### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with the `NET_RAW` capability.
+
+### 5.2.9 Minimize the admission of containers with added capabilities (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that `allowedCapabilities` is not present in policies for the cluster unless
+it is set to an empty array.
+
+### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the use of capabilities in applications running on your cluster. Where a namespace
+contains applications which do not require any Linux capabilities to operate, consider adding
+a PSP which forbids the admission of containers which do not drop all capabilities.
+
+### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
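Checks 5.2.9 and 5.2.10 can be audited with the same `kubectl`-plus-`jq` pattern used elsewhere in this guide. A minimal sketch: the filter below is what you would feed `kubectl get pods -A -o json`; the inline `sample` (with made-up pod names) stands in for live output.

```shell
# Flag containers that add Linux capabilities (checks 5.2.9/5.2.10).
# Against a live cluster: kubectl get pods -A -o json | jq -r '<filter>'
sample='{
  "items": [
    {"metadata": {"namespace": "default", "name": "web"},
     "spec": {"containers": [
       {"name": "nginx",
        "securityContext": {"capabilities": {"add": ["NET_ADMIN"]}}}]}},
    {"metadata": {"namespace": "default", "name": "worker"},
     "spec": {"containers": [{"name": "app"}]}}
  ]
}'

echo "$sample" | jq -r '
  .items[]
  | .metadata.namespace as $ns
  | .metadata.name as $pod
  | .spec.containers[]
  | select(.securityContext.capabilities.add != null)
  | "\($ns)/\($pod)/\(.name) adds \(.securityContext.capabilities.add | join(","))"'
# → default/web/nginx adds NET_ADMIN
```

Containers that surface here should either drop the extra capabilities or be covered by a policy that explicitly allows them.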
+ +### 5.2.12 Minimize the admission of HostPath volumes (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of containers with `hostPath` volumes. + +### 5.2.13 Minimize the admission of containers which use HostPorts (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of containers which use `hostPort` sections. + +## 5.3 Network Policies and CNI +### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual) + + +**Result:** warn + +**Remediation:** +If the CNI plugin in use does not support network policies, consideration should be given to +making use of a different plugin, or finding an alternate mechanism for restricting traffic +in the Kubernetes cluster. + +### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual) + + +**Result:** warn + +**Remediation:** +Follow the documentation and create NetworkPolicy objects as you need them. + +## 5.4 Secrets Management +### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual) + + +**Result:** warn + +**Remediation:** +If possible, rewrite application code to read Secrets from mounted secret files, rather than +from environment variables. + +### 5.4.2 Consider external secret storage (Manual) + + +**Result:** warn + +**Remediation:** +Refer to the Secrets management options offered by your cloud provider or a third-party +secrets management solution. + +## 5.5 Extensible Admission Control +### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and setup image provenance. 
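For check 5.3.2, the audit reduces to comparing the full list of namespaces against the namespaces that contain at least one NetworkPolicy. A minimal sketch: on a live cluster the two lists would come from `kubectl get ns` and `kubectl get netpol -A`; the inline lists below (with illustrative namespace names) stand in for that output so the comparison itself can be exercised.

```shell
# On a live cluster, produce the two sorted lists with, for example:
#   kubectl get ns --no-headers -o custom-columns=":metadata.name" | sort > /tmp/all_ns.txt
#   kubectl get netpol -A --no-headers -o custom-columns=":metadata.namespace" | sort -u > /tmp/covered_ns.txt
printf '%s\n' default kube-system app-team | sort > /tmp/all_ns.txt
printf '%s\n' app-team | sort > /tmp/covered_ns.txt

# Lines unique to the first file = namespaces with no NetworkPolicy defined.
comm -23 /tmp/all_ns.txt /tmp/covered_ns.txt
# prints:
#   default
#   kube-system
```

Any namespace printed by the final `comm` is a candidate for a default-deny NetworkPolicy.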
+ +## 5.7 General Policies +### 5.7.1 Create administrative boundaries between resources using namespaces (Manual) + + +**Result:** warn + +**Remediation:** +Follow the documentation and create namespaces for objects in your deployment as you need +them. + +### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual) + + +**Result:** warn + +**Remediation:** +Use `securityContext` to enable the docker/default seccomp profile in your pod definitions. +An example is as below: + securityContext: + seccompProfile: + type: RuntimeDefault + +### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a +suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker +Containers. + +### 5.7.4 The default namespace should not be used (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Ensure that namespaces are created to allow for appropriate segregation of Kubernetes +resources and that all new resources are created in a specific namespace. 
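The seccomp guidance in 5.7.2 and the namespace guidance in 5.7.4 can be combined in a single pod manifest. A minimal sketch — the pod name, namespace, and image are illustrative:

```shell
# Pod manifest applying the RuntimeDefault seccomp profile (5.7.2) at the pod
# level, so it covers every container, and placed outside the default
# namespace (5.7.4). Apply with: kubectl apply -f seccomp-pod.yaml
cat > seccomp-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example
  namespace: app-team
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: nginx:1.25
EOF
grep -q 'type: RuntimeDefault' seccomp-pod.yaml && echo "seccomp profile present"
# → seccomp profile present
```

Individual containers can still override the pod-level `securityContext`, so a review should also confirm that no container sets a weaker per-container profile.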
+ diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md index 62a27bd3e86..eaffdb72d92 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md @@ -513,4 +513,4 @@ rancher_kubernetes_engine_config: ## 结论 -如果你按照本指南操作,由 Rancher 提供的 RKE 自定义集群将配置为通过 CIS Kubernetes Benchmark 测试。你可以查看我们的 RKE 自我评估指南,了解我们是如何验证每个 benchmarks 的,并且你可以在你的集群上执行相同的操作。 \ No newline at end of file +如果你按照本指南操作,由 Rancher 提供的 RKE 自定义集群将配置为通过 CIS Kubernetes Benchmark 测试。你可以查看我们的 RKE 自我评估指南,了解我们是如何验证每个 benchmarks 的,并且你可以在你的集群上执行相同的操作。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md index 1a3eb88ed98..cb3a548a8b1 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md @@ -1,2863 +1,2864 @@ ---- -title: RKE 自我评估指南 - CIS Benchmark v1.7 - K8s v1.25/v1.26/v1.27 ---- - - - - - - - -本文档是 [RKE 
加固指南](rke1-hardening-guide.md)的配套文档,该指南提供了关于如何加固正在生产环境中运行并由 Rancher 管理的 RKE 集群的指导方针。本 benchmark 指南可帮助你根据 CIS Kubernetes Benchmark 中的每个 control 来评估加固集群的安全性。 - -本指南对应以下版本的 Rancher、CIS Benchmarks 和 Kubernetes: - -| Rancher 版本 | CIS Benchmark 版本 | Kubernetes 版本 | -|-----------------|-----------------------|--------------------| -| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25/v1.26/v1.27 | - -本指南将介绍各种 controls,并提供更新的示例命令来审计 Rancher 创建的集群中的合规性。由于 Rancher 和 RKE 将 Kubernetes 服务安装为 Docker 容器,因此 CIS Kubernetes Benchmark 中的许多 control 验证检查不适用。这些检查将返回 `Not Applicable` 的结果。 - -本文档适用于 Rancher 运维人员、安全团队、审计员和决策者。 - -有关每个 control 的更多信息,包括详细描述和未通过测试的补救措施,请参考 CIS Kubernetes Benchmark v1.7 的相应部分。你可以在[互联网安全中心 (CIS)](https://www.cisecurity.org/benchmark/kubernetes/)创建免费账户后下载 benchmark。 - -## 测试方法 - -Rancher 和 RKE 通过 Docker 容器安装 Kubernetes 服务。配置是通过初始化时传递给容器的参数定义的,而不是通过配置文件。 - -在 control 审计与原始 CIS benchmark 不同时,提供了针对 Rancher 的特定审计命令以进行测试。在执行测试时,你将需要访问所有 RKE 节点主机上的命令行。这些命令还使用了 [kubectl](https://kubernetes.io/docs/tasks/tools/)(带有有效的配置文件)和 [jq](https://stedolan.github.io/jq/) 工具,在测试和评估测试结果时这些工具是必需的。 - -:::note - -本指南仅涵盖 `automated`(之前称为 `scored`)测试。 - -::: - -### Controls - -## 1.1 Control Plane Node Configuration Files -### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the -control plane node. -For example, chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. 
-For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. 
- -### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chmod 600 /etc/kubernetes/manifests/etcd.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root /etc/kubernetes/manifests/etcd.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual) - - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. 
-For example, chmod 600 - -**Audit:** - -```bash -ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a -``` - -**Expected Result**: - -```console -'permissions' is present -``` - -### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual) - - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root - -**Audit:** - -```bash -ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G -``` - -**Expected Result**: - -```console -'root:root' is present -``` - -### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated) - - -**Result:** pass - -**Remediation:** -On the etcd server node, get the etcd data directory, passed as an argument --data-dir, -from the command 'ps -ef | grep etcd'. -Run the below command (based on the etcd data directory found above). For example, -chmod 700 /var/lib/etcd - -**Audit:** - -```bash -stat -c %a /node/var/lib/etcd -``` - -**Expected Result**: - -```console -'700' is equal to '700' -``` - -**Returned Value**: - -```console -700 -``` - -### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) - - -**Result:** pass - -**Remediation:** -On the etcd server node, get the etcd data directory, passed as an argument --data-dir, -from the command 'ps -ef | grep etcd'. -Run the below command (based on the etcd data directory found above). 
-For example, chown etcd:etcd /var/lib/etcd - -**Audit:** - -```bash -stat -c %U:%G /node/var/lib/etcd -``` - -**Expected Result**: - -```console -'etcd:etcd' is present -``` - -**Returned Value**: - -```console -etcd:etcd -``` - -### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chmod 600 /etc/kubernetes/admin.conf -Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. - -### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chown root:root /etc/kubernetes/admin.conf -Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. - -### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chmod 600 scheduler -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. - -### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root scheduler -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. 
- -### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chmod 600 controllermanager -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. - -### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root controllermanager -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. - -### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. 
-For example, -chown -R root:root /etc/kubernetes/pki/ - -**Audit Script:** `check_files_owner_in_dir.sh` - -```bash -#!/usr/bin/env bash - -# This script is used to ensure the owner is set to root:root for -# the given directory and all the files in it -# -# inputs: -# $1 = /full/path/to/directory -# -# outputs: -# true/false - -INPUT_DIR=$1 - -if [[ "${INPUT_DIR}" == "" ]]; then - echo "false" - exit -fi - -if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then - echo "false" - exit -fi - -statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*) -while read -r statInfoLine; do - f=$(echo ${statInfoLine} | cut -d' ' -f1) - p=$(echo ${statInfoLine} | cut -d' ' -f2) - - if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then - if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then - echo "false" - exit - fi - else - if [[ "$p" != "root:root" ]]; then - echo "false" - exit - fi - fi -done <<< "${statInfoLines}" - - -echo "true" -exit - -``` - -**Audit Execution:** - -```bash -./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Returned Value**: - -```console -true -``` - -### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual) - - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' -exec chmod -R 600 {} + - -**Audit:** - -```bash -find /node/etc/kubernetes/ssl/ -name '*.pem' ! 
-name '*key.pem' | xargs stat -c permissions=%a -``` - -**Expected Result**: - -```console -permissions has permissions 644, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=644 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 -``` - -### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -find /node/etc/kubernetes/ssl/ -name '*key.pem' -exec chmod -R 600 {} + - -**Audit:** - -```bash -find /node/etc/kubernetes/ssl/ -name '*key.pem' | xargs stat -c permissions=%a -``` - -**Expected Result**: - -```console -permissions has permissions 600, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 -``` - -## 1.2 API Server -### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the below parameter. ---anonymous-auth=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--anonymous-auth' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the documentation and configure alternate mechanisms for authentication. Then, -edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and remove the --token-auth-file= parameter. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--token-auth-file' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and remove the `DenyServiceExternalIPs` -from enabled admission plugins. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the -apiserver and kubelets. Then, edit API server pod specification file -/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the -kubelet client certificate and key parameters as below. ---kubelet-client-certificate= ---kubelet-client-key= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
-```
-
-### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between
-the apiserver and kubelets. Then, edit the API server pod specification file
-/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
---kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
---kubelet-certificate-authority=
-When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers.
-
-### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
-One such example could be as below. 
---authorization-mode=RBAC - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' does not have 'AlwaysAllow' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to a value that includes Node. ---authorization-mode=Node,RBAC - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' has 'Node' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to a value that includes RBAC, -for example `--authorization-mode=Node,RBAC`. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' has 'RBAC' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set the desired limits in a configuration file. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -and set the below parameters. ---enable-admission-plugins=...,EventRateLimit,... ---admission-control-config-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'EventRateLimit' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a -value that does not include AlwaysAdmit. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual) - - -**Result:** warn - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to include -AlwaysPullImages. ---enable-admission-plugins=...,AlwaysPullImages,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'AlwaysPullImages' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual) - - -**Result:** warn - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to include -SecurityContextDeny, unless PodSecurityPolicy is already in place. ---enable-admission-plugins=...,SecurityContextDeny,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the documentation and create ServiceAccount objects as per your environment. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and ensure that the --disable-admission-plugins parameter is set to a -value that does not include ServiceAccount. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --disable-admission-plugins parameter to -ensure it does not include NamespaceLifecycle. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to a -value that includes NodeRestriction. ---enable-admission-plugins=...,NodeRestriction,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'NodeRestriction' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.16 Ensure that the --secure-port argument is not set to 0 - NoteThis recommendation is obsolete and will be deleted per the consensus process (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and either remove the --secure-port parameter or -set it to a different (non-zero) desired port. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--secure-port' is greater than 0 OR '--secure-port' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.17 Ensure that the --profiling argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the below parameter. ---profiling=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--profiling' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.18 Ensure that the --audit-log-path argument is set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --audit-log-path parameter to a suitable path and -file where you would like audit logs to be written, for example, ---audit-log-path=/var/log/apiserver/audit.log - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-log-path' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --audit-log-maxage parameter to 30 -or as an appropriate number of days, for example, ---audit-log-maxage=30 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-log-maxage' is greater or equal to 30 -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate -value. For example, ---audit-log-maxbackup=10 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-log-maxbackup' is greater or equal to 10 -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB. -For example, to set it as 100 MB, --audit-log-maxsize=100 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-log-maxsize' is greater or equal to 100 -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual)

**Result:** warn

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
and set the below parameter as appropriate and if needed.
For example, --request-timeout=300s

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--service-account-lookup=true
Alternatively, you can delete the --service-account-lookup parameter from this file so
that the default takes effect.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --service-account-key-file parameter
to the public key file for service accounts. For example,
--service-account-key-file=

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--service-account-key-file' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the etcd certificate and key file parameters.
--etcd-certfile=
--etcd-keyfile=

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--etcd-certfile' is present AND '--etcd-keyfile' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the TLS certificate and private key file parameters.
--tls-cert-file=
--tls-private-key-file=

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--tls-cert-file' is present AND '--tls-private-key-file' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the client certificate authority file.
--client-ca-file=

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--client-ca-file' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the etcd certificate authority file parameter.
--etcd-cafile=

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--etcd-cafile' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and configure an EncryptionConfig file.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --encryption-provider-config parameter to the path of that file.
For example, --encryption-provider-config=

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--encryption-provider-config' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.30 Ensure that encryption providers are appropriately configured (Manual)

**Result:** warn

**Remediation:**
Follow the Kubernetes documentation and configure an EncryptionConfig file.
In this file, choose aescbc, kms or secretbox as the encryption provider.

**Audit:**

```bash
ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%'); if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
```

**Expected Result**:

```console
'provider' is present
```

### 1.2.31 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

## 1.3 Controller Manager

### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated)

**Result:** pass

**Remediation:**
Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
for example, --terminated-pod-gc-threshold=10

**Audit:**

```bash
/bin/ps -ef | grep kube-controller-manager | grep -v grep
```

**Expected Result**:

```console
'--terminated-pod-gc-threshold' is present
```

**Returned Value**:

```console
root 4184 4163 1 Sep11 ?
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true
```

### 1.3.2 Ensure that the --profiling argument is set to false (Automated)

**Result:** pass

**Remediation:**
Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
on the control plane node and set the below parameter.
--profiling=false

**Audit:**

```bash
/bin/ps -ef | grep kube-controller-manager | grep -v grep
```

**Expected Result**:

```console
'--profiling' is equal to 'false'
```

**Returned Value**:

```console
root 4184 4163 1 Sep11 ?
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node to set the below parameter. ---use-service-account-credentials=true - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--use-service-account-credentials' is not equal to 'false' -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --service-account-private-key-file parameter -to the private key file for service accounts. ---service-account-private-key-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--service-account-private-key-file' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --root-ca-file parameter to the certificate bundle file. ---root-ca-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--root-ca-file' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. ---feature-gates=RotateKubeletServerCertificate=true -Clusters provisioned by RKE handle certificate rotation directly through RKE. 
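-
-The controller manager checks above all audit the same `ps` output. As a hedged illustration (not part of kube-bench), the flags from checks 1.3.1 through 1.3.5 can be verified in one pass against a captured command line; the sample `cmdline` below is abbreviated from the returned values shown above:
-
-```bash
-# Sample command line (abbreviated from the audit output above).
-cmdline='kube-controller-manager --terminated-pod-gc-threshold=1000 --profiling=false --use-service-account-credentials=true --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem'
-
-# Report each required flag as present or missing.
-missing=0
-for flag in terminated-pod-gc-threshold profiling use-service-account-credentials \
-            service-account-private-key-file root-ca-file; do
-  case "$cmdline" in
-    *"--${flag}"*) echo "--${flag}: present" ;;
-    *)             echo "--${flag}: MISSING"; missing=$((missing + 1)) ;;
-  esac
-done
-```
-
-On a live node, the sample assignment would be replaced by capturing the real process command line with the audit command used throughout this section.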
- -### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and ensure the correct value for the --bind-address parameter - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--bind-address' is present OR '--bind-address' is not present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -## 1.4 Scheduler -### 1.4.1 Ensure that the --profiling argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file -on the control 
plane node and set the below parameter. ---profiling=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected Result**: - -```console -'--profiling' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4339 4318 0 Sep11 ? 00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true -``` - -### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml -on the control plane node and ensure the correct value for the --bind-address parameter - -**Audit:** - -```bash -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected Result**: - -```console -'--bind-address' is present OR '--bind-address' is not present -``` - -**Returned Value**: - -```console -root 4339 4318 0 Sep11 ? 
00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true -``` - -## 2 Etcd Node Configuration -### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the etcd service documentation and configure TLS encryption. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml -on the master node and set the below parameters. ---cert-file= ---key-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--cert-file' is present AND '--key-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and set the below parameter. ---client-cert-auth="true" - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--client-cert-auth' is equal to 'true' -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.3 Ensure that the --auto-tls argument is not set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and either remove the --auto-tls parameter or set it to false. 
- --auto-tls=false - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present -``` - -**Returned Value**: - -```console -PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ -``` - -### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the etcd service documentation and configure peer TLS encryption as appropriate -for your etcd cluster. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the -master node and set the below parameters. ---peer-cert-file= ---peer-key-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--peer-cert-file' is present AND '--peer-key-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and set the below parameter. ---peer-client-cert-auth=true - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--peer-client-cert-auth' is equal to 'true' -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and either remove the --peer-auto-tls parameter or set it to false. 
--peer-auto-tls=false

**Audit:**

```bash
/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
```

**Expected Result**:

```console
'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present
```

**Returned Value**:

```console
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/
```

### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)

**Result:** pass

**Remediation:**
[Manual test]
Follow the etcd documentation and create a dedicated certificate authority setup for the
etcd service.
Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the
master node and set the below parameter.
--trusted-ca-file=

**Audit:**

```bash
/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
```

**Expected Result**:

```console
'--trusted-ca-file' is present
```

**Returned Value**:

```console
etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000
root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
```

## 3.1 Authentication and Authorization
### 3.1.1 Client certificate authentication should not be used for users (Manual)

**Result:** warn

**Remediation:**
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.

### 3.1.2 Service account token authentication should not be used for users (Manual)

**Result:** warn

**Remediation:**
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of service account tokens.

### 3.1.3 Bootstrap token authentication should not be used for users (Manual)

**Result:** warn

**Remediation:**
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of bootstrap tokens.

## 3.2 Logging
### 3.2.1 Ensure that a minimal audit policy is created (Automated)

**Result:** pass

**Remediation:**
Create an audit policy file for your cluster.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-policy-file' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)

**Result:** warn

**Remediation:**
Review the audit policy provided for the cluster and ensure that it covers
at least the following areas:
- Access to Secrets managed by the cluster. Care should be taken to only
  log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
  order to avoid the risk of logging sensitive data.
- Modification of Pod and Deployment objects.
- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.

For most requests, minimally logging at the Metadata level (the most basic
level of logging) is recommended.

## 4.1 Worker Node Configuration Files
### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)

**Result:** Not Applicable

**Remediation:**
Run the below command (based on the file location on your system) on each worker node.
For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service.
All configuration is passed in as arguments at container run time.
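Because RKE passes all kubelet configuration as container arguments, the checks in this guide read flag values out of `ps` output rather than out of a config file. As a rough illustration only (the command line below is a canned sample, not taken from a live node), a single flag's value can be extracted from such output like this:

```shell
# Canned sample of a kubelet argument list; on a real RKE node you would
# capture the actual line with: /bin/ps -fC kubelet
cmdline='kubelet --anonymous-auth=false --read-only-port=0 --authorization-mode=Webhook'

# Split the arguments one per line, find the flag, and keep the value after '='.
value=$(echo "$cmdline" | tr ' ' '\n' | grep '^--read-only-port=' | cut -d= -f2)
echo "$value"
```

This prints `0`, the value the 4.2.4 check below expects for `--read-only-port`.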
### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)

**Result:** Not Applicable

**Remediation:**
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service.
All configuration is passed in as arguments at container run time.

### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)

**Result:** pass

**Remediation:**
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml

**Audit:**

```bash
/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
```

**Expected Result**:

```console
permissions has permissions 600, expected 600 or more restrictive
```

**Returned Value**:

```console
permissions=600
```

### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)

**Result:** pass

**Remediation:**
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml

**Audit:**

```bash
/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
```

**Expected Result**:

```console
'root:root' is present
```

**Returned Value**:

```console
root:root
```

### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)

**Result:** pass

**Remediation:**
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml

**Audit:**

```bash
/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
```

**Expected Result**:

```console
permissions has permissions 600, expected 600 or more restrictive
```

**Returned Value**:

```console
permissions=600
```

### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)

**Result:** pass

**Remediation:**
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml

**Audit:**

```bash
/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
```

**Expected Result**:

```console
'root:root' is present
```

**Returned Value**:

```console
root:root
```

### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)

**Result:** fail

**Remediation:**
Run the following command to modify the file permissions of the --client-ca-file.
chmod 600

**Audit:**

```bash
stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem
```

**Expected Result**:

```console
permissions has permissions 644, expected 600 or more restrictive
```

**Returned Value**:

```console
permissions=644
```

### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated)

**Result:** pass

**Remediation:**
Run the following command to modify the ownership of the --client-ca-file.
chown root:root

**Audit:**

```bash
stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem
```

**Expected Result**:

```console
'root:root' is equal to 'root:root'
```

**Returned Value**:

```console
root:root
```

### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)

**Result:** Not Applicable

**Remediation:**
Run the following command (using the config file location identified in the Audit step)
chmod 600 /var/lib/kubelet/config.yaml
Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet.
All configuration is passed in as arguments at container run time.
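The 4.1.x file checks above all follow the same `chmod`-then-`stat` pattern. A minimal sketch of the 4.1.7 remediation, run against a throwaway file instead of the real `kube-ca.pem` so it is safe to execute anywhere (GNU `stat` assumed):

```shell
# Demonstrate the failing and remediated permission states on a scratch file
# rather than /node/etc/kubernetes/ssl/kube-ca.pem.
f=$(mktemp)
chmod 644 "$f"
before=$(stat -c %a "$f")   # world-readable state flagged by check 4.1.7
chmod 600 "$f"
after=$(stat -c %a "$f")    # owner-only state the benchmark expects
echo "permissions=$before -> permissions=$after"
rm -f "$f"
```

The same pattern applies to the ownership checks, swapping `chmod`/`%a` for `chown`/`%U:%G`.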
### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual)

**Result:** Not Applicable

**Remediation:**
Run the following command (using the config file location identified in the Audit step)
chown root:root /var/lib/kubelet/config.yaml
Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet.
All configuration is passed in as arguments at container run time.

## 4.2 Kubelet
### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)

**Result:** pass

**Remediation:**
If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
`false`.
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
`--anonymous-auth=false`
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service

**Audit:**

```bash
/bin/ps -fC kubelet
```

**Audit Config:**

```bash
/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
```

**Expected Result**:

```console
'--anonymous-auth' is equal to 'false'
```

**Returned Value**:

```console
UID PID PPID C STIME TTY TIME CMD
root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
```

### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)

**Result:** pass

**Remediation:**
If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service.
For example,
systemctl daemon-reload
systemctl restart kubelet.service

**Audit:**

```bash
/bin/ps -fC kubelet
```

**Audit Config:**

```bash
/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
```

**Expected Result**:

```console
'--authorization-mode' does not have 'AlwaysAllow'
```

**Returned Value**:

```console
UID PID PPID C STIME TTY TIME CMD
root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
```

### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)

**Result:** pass

**Remediation:**
If using a Kubelet config file, edit the file
to set `authentication.x509.clientCAFile` to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service

**Audit:**

```bash
/bin/ps -fC kubelet
```

**Audit Config:**

```bash
/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
```

**Expected Result**:

```console
'--client-ca-file' is present
```

**Returned Value**:

```console
UID PID PPID C STIME TTY TIME CMD
root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
```

### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated)

**Result:** pass

**Remediation:**
If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service

**Audit:**

```bash
/bin/ps -fC kubelet
```

**Audit Config:**

```bash
/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
```

**Expected Result**:

```console
'--read-only-port' is equal to '0' OR '--read-only-port' is not present
```

**Returned Value**:

```console
UID PID PPID C STIME TTY TIME CMD
root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
```

### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)

**Result:** pass

**Remediation:**
If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
value other than 0.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service.
For example,
systemctl daemon-reload
systemctl restart kubelet.service

**Audit:**

```bash
/bin/ps -fC kubelet
```

**Audit Config:**

```bash
/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
```

**Expected Result**:

```console
'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present
```

**Returned Value**:

```console
UID PID PPID C STIME TTY TIME CMD
root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
```

### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated)

**Result:** pass
**Remediation:**
If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service

**Audit:**

```bash
/bin/ps -fC kubelet
```

**Audit Config:**

```bash
/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
```

**Expected Result**:

```console
'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present
```

**Returned Value**:

```console
UID PID PPID C STIME TTY TIME CMD
root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
```

### 4.2.7 Ensure that the --hostname-override argument is not set (Manual)

**Result:** Not Applicable

**Remediation:**
Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
on each worker node and remove the --hostname-override argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
Not Applicable - Clusters provisioned by RKE set the --hostname-override to avoid any hostname configuration errors.

### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)

**Result:** pass

**Remediation:**
If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service

**Audit:**

```bash
/bin/ps -fC kubelet
```

**Audit Config:**

```bash
/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
```

**Expected Result**:

```console
'--event-qps' is greater or equal to 0 OR '--event-qps' is not present
```

**Returned Value**:

```console
UID PID PPID C STIME TTY TIME CMD
root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
```

### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)

**Result:** pass

**Remediation:**
If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
to the location of the corresponding private key file.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
---tls-cert-file= ---tls-private-key-file= -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--tls-cert-file' is present AND '--tls-private-key-file' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.10 Ensure that the --rotate-certificates argument is 
not set to false (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to add the line `rotateCertificates` to `true` or -remove it altogether to use the default value. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS -variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--rotate-certificates' is present OR '--rotate-certificates' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true 
--cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and set the below parameter in the KUBELET_CERTIFICATE_ARGS variable.
---feature-gates=RotateKubeletServerCertificate=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-Not Applicable - Clusters provisioned by RKE handle certificate rotation directly through RKE.
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
-TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-or to a subset of these values.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the --tls-cipher-suites parameter as follows, or to a subset of these values.
---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.13 Ensure that a limit is set on pod PIDs (Manual) - - -**Result:** warn - -**Remediation:** -Decide on an appropriate level for this parameter and set it, -either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting. - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--pod-max-pids' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -## 5.1 RBAC and Service Accounts -### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual) - - -**Result:** warn - -**Remediation:** -Identify all clusterrolebindings to the cluster-admin role. Check if they are used and -if they need this role or if they could use a role with fewer privileges. 
-Where possible, first bind users to a lower privileged role and then remove the -clusterrolebinding to the cluster-admin role : -kubectl delete clusterrolebinding [name] - -### 5.1.2 Minimize access to secrets (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove get, list and watch access to Secret objects in the cluster. - -### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual) - - -**Result:** warn - -**Remediation:** -Where possible replace any use of wildcards in clusterroles and roles with specific -objects or actions. - -### 5.1.4 Minimize access to create pods (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove create access to pod objects in the cluster. - -### 5.1.5 Ensure that default service accounts are not actively used. (Manual) - - -**Result:** pass - -**Remediation:** -Create explicit service accounts wherever a Kubernetes workload requires specific access -to the Kubernetes API server. -Modify the configuration of each default service account to include this value -automountServiceAccountToken: false - -**Audit Script:** `check_for_default_sa.sh` - -```bash -#!/bin/bash - -set -eE - -handle_error() { - echo "false" -} - -trap 'handle_error' ERR - -count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l) -if [[ ${count_sa} -gt 0 ]]; then - echo "false" - exit -fi - -for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name") -do - for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"') - do - read kind name <<<$(IFS=","; echo $result) - 
resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l) - if [[ ${resource_count} -gt 0 ]]; then - echo "false" - exit - fi - done -done - - -echo "true" - -``` - -**Audit Execution:** - -```bash -./check_for_default_sa.sh -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Returned Value**: - -```console -true -``` - -### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) - - -**Result:** warn - -**Remediation:** -Modify the definition of pods and service accounts which do not need to mount service -account tokens to disable it. - -### 5.1.7 Avoid use of system:masters group (Manual) - - -**Result:** warn - -**Remediation:** -Remove the system:masters group from all users in the cluster. - -### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove the impersonate, bind and escalate rights from subjects. - -### 5.1.9 Minimize access to create persistent volumes (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove create access to PersistentVolume objects in the cluster. - -### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the proxy sub-resource of node objects. - -### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the approval sub-resource of certificatesigningrequest objects. 
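Several of the 5.1.x checks above are manual, but they can be partially scripted with the same `kubectl` and `jq` tooling this guide already relies on. The snippet below is a hedged sketch, not part of the official benchmark audits: it filters a ClusterRoleBinding list for bindings that grant `cluster-admin` (control 5.1.1). The inline `sample` document is fabricated so the filter can be exercised offline; against a live cluster you would pipe in `kubectl get clusterrolebindings -o json` instead.

```shell
# Sketch for CIS 5.1.1: list bindings that grant cluster-admin.
# The sample document below is fabricated; on a live cluster, replace it with:
#   kubectl get clusterrolebindings -o json
sample='{"items":[
  {"metadata":{"name":"cluster-admin"},"roleRef":{"kind":"ClusterRole","name":"cluster-admin"}},
  {"metadata":{"name":"read-only"},"roleRef":{"kind":"ClusterRole","name":"view"}}
]}'

# Print the name of every binding whose roleRef is the cluster-admin ClusterRole.
echo "$sample" | jq -r '.items[]
  | select(.roleRef.kind == "ClusterRole" and .roleRef.name == "cluster-admin")
  | .metadata.name'
```

Each name this prints is a candidate for review: check whether the subjects really need `cluster-admin` or could be bound to a lower-privileged role first.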
- -### 5.1.12 Minimize access to webhook configuration objects (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects - -### 5.1.13 Minimize access to the service account token creation (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the token sub-resource of serviceaccount objects. - -## 5.2 Pod Security Standards -### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual) - - -**Result:** warn - -**Remediation:** -Ensure that either Pod Security Admission or an external policy control system is in place -for every namespace which contains user workloads. - -### 5.2.2 Minimize the admission of privileged containers (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of privileged containers. - -### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostPID` containers. - -### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostIPC` containers. - -### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostNetwork` containers. 
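Controls 5.2.3–5.2.5 above call for policies restricting `hostPID`, `hostIPC`, and `hostNetwork` pods. As an illustrative sketch (not an official audit), the filter below flags offending pods in a pod list. The inline `pods` document is fabricated so the filter can be tested offline; in practice you would pipe in `kubectl get pods -A -o json`.

```shell
# Sketch: flag pods sharing host namespaces (CIS 5.2.3-5.2.5).
# Fabricated sample; on a live cluster pipe in: kubectl get pods -A -o json
pods='{"items":[
  {"metadata":{"namespace":"kube-system","name":"node-agent"},"spec":{"hostNetwork":true}},
  {"metadata":{"namespace":"default","name":"web"},"spec":{}}
]}'

# Print namespace/name for every pod that opts into a host namespace.
echo "$pods" | jq -r '.items[]
  | select(.spec.hostPID == true or .spec.hostIPC == true or .spec.hostNetwork == true)
  | "\(.metadata.namespace)/\(.metadata.name)"'
```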
-
-### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
-
-### 5.2.7 Minimize the admission of root containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
-or `MustRunAs` with the range of UIDs not including 0, is set.
-
-### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with the `NET_RAW` capability.
-
-### 5.2.9 Minimize the admission of containers with added capabilities (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that `allowedCapabilities` is not present in policies for the cluster unless
-it is set to an empty array.
-
-### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the use of capabilities in applications running on your cluster. Where a namespace
-contains applications which do not require any Linux capabilities to operate, consider adding
-a PSP which forbids the admission of containers which do not drop all capabilities.
-
-### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
-
-### 5.2.12 Minimize the admission of HostPath volumes (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `hostPath` volumes.
-
-### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers which use `hostPort` sections.
-
-## 5.3 Network Policies and CNI
-### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If the CNI plugin in use does not support network policies, consideration should be given to
-making use of a different plugin, or finding an alternate mechanism for restricting traffic
-in the Kubernetes cluster.
-
-### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the documentation and create NetworkPolicy objects as you need them.
-
-## 5.4 Secrets Management
-### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If possible, rewrite application code to read Secrets from mounted secret files, rather than
-from environment variables.
-
-### 5.4.2 Consider external secret storage (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Refer to the Secrets management options offered by your cloud provider or a third-party
-secrets management solution.
-
-## 5.5 Extensible Admission Control
-### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and set up image provenance.
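Control 5.3.2 above recommends NetworkPolicy objects in every namespace. A common starting point, shown here as an illustrative sketch rather than a benchmark requirement, is a default-deny policy that selects all pods in a namespace; workload-specific allow policies are then layered on top. The namespace name is a made-up placeholder.

```yaml
# Illustrative default-deny-all policy for one namespace (adjust metadata to suit).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-namespace   # hypothetical namespace
spec:
  podSelector: {}           # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```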
-
-## 5.7 General Policies
-### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the documentation and create namespaces for objects in your deployment as you need
-them.
-
-### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
-An example is shown below:
-  securityContext:
-    seccompProfile:
-      type: RuntimeDefault
-
-### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
-suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
-Containers.
-
-### 5.7.4 The default namespace should not be used (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
-resources and that all new resources are created in a specific namespace.
+---
+title: RKE Self-Assessment Guide - CIS Benchmark v1.7 - K8s v1.25/v1.26/v1.27
+---
+
+
+
+
+
+
+
+This document is a companion to the [RKE Hardening Guide](rke1-hardening-guide.md), which provides guidance on hardening RKE clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
+
+This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
+
+| Rancher Version | CIS Benchmark Version | Kubernetes Version |
+|-----------------|-----------------------|--------------------|
+| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25/v1.26/v1.27 |
+
+This guide walks through the various controls and provides updated example commands to audit compliance in Rancher-created clusters. Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. These checks return a result of `Not Applicable`.
+
+This document is intended for Rancher operators, security teams, auditors, and decision makers.
+
+For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.7. You can download the benchmark, after creating a free account, from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
+
+## Testing Methodology
+
+Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files.
+
+Where the audit of a control differs from the original CIS benchmark, Rancher-specific audit commands are provided for testing. When performing the tests, you will need access to the command line on the hosts of all RKE nodes. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required in testing and evaluating test results.
+
+:::note
+
+This guide only covers `automated` (previously called `scored`) tests.
+
+:::
+
+### Controls
+
+## 1.1 Control Plane Node Configuration Files
+### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the
+control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
+All configuration is passed in as arguments at container run time.
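The 1.1.x file-permission checks in this section all follow the same pattern: `chmod` to remediate, `stat -c %a` to audit. The snippet below is a small self-contained sketch of that pattern using a temporary file as a stand-in; on a real control plane node you would point `stat` at the file named in each control instead.

```shell
# Demonstrate the audit pattern shared by the 1.1.x checks:
# remediate with chmod, then verify with stat -c %a.
f=$(mktemp)             # stand-in for e.g. a pod spec or kubeconfig file
chmod 600 "$f"
perms=$(stat -c %a "$f")
echo "permissions=$perms"
rm -f "$f"
```

A compliant file reports 600 or more restrictive (e.g. 400); anything looser, such as 644, fails the check.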
+ +### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. +All configuration is passed in as arguments at container run time. + +### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. +All configuration is passed in as arguments at container run time. + +### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. +All configuration is passed in as arguments at container run time. + +### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. 
+For example, chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. +All configuration is passed in as arguments at container run time. + +### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. +All configuration is passed in as arguments at container run time. + +### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chmod 600 /etc/kubernetes/manifests/etcd.yaml +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. +All configuration is passed in as arguments at container run time. + +### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root /etc/kubernetes/manifests/etcd.yaml +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. +All configuration is passed in as arguments at container run time. 
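The ownership checks above (1.1.2, 1.1.4, 1.1.6, 1.1.8) remediate with `chown root:root` and audit with `stat -c %U:%G`. As a self-contained sketch of the directory-wide form of that audit, using a temporary directory as a stand-in for a real path such as `/etc/kubernetes/ssl`:

```shell
# Sketch: report user:group ownership for every file under a directory,
# the same find | xargs stat shape used by the audits in this section.
d=$(mktemp -d)
touch "$d/a.pem" "$d/b.pem"   # stand-ins for certificate files
find "$d" -type f | xargs --no-run-if-empty stat -c '%n %U:%G'
rm -rf "$d"
```

On a compliant node every line would end in `root:root` (or `etcd:etcd` for etcd data, per 1.1.12).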
+ +### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual) + + +**Result:** warn + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chmod 600 + +**Audit:** + +```bash +ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a +``` + +**Expected Result**: + +```console +'permissions' is present +``` + +### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual) + + +**Result:** warn + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root + +**Audit:** + +```bash +ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G +``` + +**Expected Result**: + +```console +'root:root' is present +``` + +### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated) + + +**Result:** pass + +**Remediation:** +On the etcd server node, get the etcd data directory, passed as an argument --data-dir, +from the command 'ps -ef | grep etcd'. +Run the below command (based on the etcd data directory found above). 
For example, +chmod 700 /var/lib/etcd + +**Audit:** + +```bash +stat -c %a /node/var/lib/etcd +``` + +**Expected Result**: + +```console +'700' is equal to '700' +``` + +**Returned Value**: + +```console +700 +``` + +### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) + + +**Result:** pass + +**Remediation:** +On the etcd server node, get the etcd data directory, passed as an argument --data-dir, +from the command 'ps -ef | grep etcd'. +Run the below command (based on the etcd data directory found above). +For example, chown etcd:etcd /var/lib/etcd + +**Audit:** + +```bash +stat -c %U:%G /node/var/lib/etcd +``` + +**Expected Result**: + +```console +'etcd:etcd' is present +``` + +**Returned Value**: + +```console +etcd:etcd +``` + +### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chmod 600 /etc/kubernetes/admin.conf +Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. + +### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chown root:root /etc/kubernetes/admin.conf +Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. + +### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. 
+For example, +chmod 600 scheduler +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. +All configuration is passed in as arguments at container run time. + +### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root scheduler +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. +All configuration is passed in as arguments at container run time. + +### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chmod 600 controllermanager +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. +All configuration is passed in as arguments at container run time. + +### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root controllermanager +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. +All configuration is passed in as arguments at container run time. + +### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. 
+For example, +chown -R root:root /etc/kubernetes/pki/ + +**Audit Script:** `check_files_owner_in_dir.sh` + +```bash +#!/usr/bin/env bash + +# This script is used to ensure the owner is set to root:root for +# the given directory and all the files in it +# +# inputs: +# $1 = /full/path/to/directory +# +# outputs: +# true/false + +INPUT_DIR=$1 + +if [[ "${INPUT_DIR}" == "" ]]; then + echo "false" + exit +fi + +if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then + echo "false" + exit +fi + +statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*) +while read -r statInfoLine; do + f=$(echo ${statInfoLine} | cut -d' ' -f1) + p=$(echo ${statInfoLine} | cut -d' ' -f2) + + if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then + if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then + echo "false" + exit + fi + else + if [[ "$p" != "root:root" ]]; then + echo "false" + exit + fi + fi +done <<< "${statInfoLines}" + + +echo "true" +exit + +``` + +**Audit Execution:** + +```bash +./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl +``` + +**Expected Result**: + +```console +'true' is equal to 'true' +``` + +**Returned Value**: + +```console +true +``` + +### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual) + + +**Result:** warn + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' -exec chmod -R 600 {} + + +**Audit:** + +```bash +find /node/etc/kubernetes/ssl/ -name '*.pem' ! 
-name '*key.pem' | xargs stat -c permissions=%a +``` + +**Expected Result**: + +```console +permissions has permissions 644, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=644 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 +``` + +### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +find /node/etc/kubernetes/ssl/ -name '*key.pem' -exec chmod -R 600 {} + + +**Audit:** + +```bash +find /node/etc/kubernetes/ssl/ -name '*key.pem' | xargs stat -c permissions=%a +``` + +**Expected Result**: + +```console +permissions has permissions 600, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 +``` + +## 1.2 API Server +### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the below parameter. +--anonymous-auth=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--anonymous-auth' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the documentation and configure alternate mechanisms for authentication. Then, +edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and remove the --token-auth-file= parameter. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--token-auth-file' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and remove `DenyServiceExternalIPs` +from the enabled admission plugins. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the +apiserver and kubelets. Then, edit API server pod specification file +/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the +kubelet client certificate and key parameters as below. +--kubelet-client-certificate= +--kubelet-client-key= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between +the apiserver and kubelets. Then, edit the API server pod specification file +/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the +--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority. +--kubelet-certificate-authority= +When generating serving certificates, functionality could break in conjunction with hostname overrides, which are required for certain cloud providers. + +### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to a value other than AlwaysAllow. +One such example is shown below. 
+--authorization-mode=RBAC + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' does not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to a value that includes Node. +--authorization-mode=Node,RBAC + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' has 'Node' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to a value that includes RBAC, +for example `--authorization-mode=Node,RBAC`. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' has 'RBAC' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set the desired limits in a configuration file. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +and set the below parameters. +--enable-admission-plugins=...,EventRateLimit,... +--admission-control-config-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'EventRateLimit' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a +value that does not include AlwaysAdmit. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual) + + +**Result:** warn + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --enable-admission-plugins parameter to include +AlwaysPullImages. +--enable-admission-plugins=...,AlwaysPullImages,... + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'AlwaysPullImages' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual) + + +**Result:** warn + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --enable-admission-plugins parameter to include +SecurityContextDeny, unless PodSecurityPolicy is already in place. +--enable-admission-plugins=...,SecurityContextDeny,... + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the documentation and create ServiceAccount objects as per your environment.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
+value that does not include ServiceAccount.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --disable-admission-plugins parameter to
+ensure it does not include NamespaceLifecycle.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to a
+value that includes NodeRestriction.
+--enable-admission-plugins=...,NodeRestriction,...
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'NodeRestriction'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.16 Ensure that the --secure-port argument is not set to 0 - Note: This recommendation is obsolete and will be deleted per the consensus process (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and either remove the --secure-port parameter or
+set it to a different (non-zero) desired port.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--secure-port' is greater than 0 OR '--secure-port' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.17 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.18 Ensure that the --audit-log-path argument is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-path parameter to a suitable path and
+file where you would like audit logs to be written, for example,
+--audit-log-path=/var/log/apiserver/audit.log
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-path' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxage parameter to 30
+or as an appropriate number of days, for example,
+--audit-log-maxage=30
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxage' is greater or equal to 30
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
+value. For example,
+--audit-log-maxbackup=10
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxbackup' is greater or equal to 10
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
+For example, to set it as 100 MB, --audit-log-maxsize=100
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxsize' is greater or equal to 100
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual) + + +**Result:** warn + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +and set the below parameter as appropriate and if needed. +For example, --request-timeout=300s + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the below parameter. +--service-account-lookup=true +Alternatively, you can delete the --service-account-lookup parameter from this file so +that the default takes effect. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --service-account-key-file parameter +to the public key file for service accounts. For example, +--service-account-key-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--service-account-key-file' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the etcd certificate and key file parameters. +--etcd-certfile= +--etcd-keyfile= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--etcd-certfile' is present AND '--etcd-keyfile' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection on the apiserver. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the TLS certificate and private key file parameters. +--tls-cert-file= +--tls-private-key-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--tls-cert-file' is present AND '--tls-private-key-file' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection on the apiserver. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the client certificate authority file. +--client-ca-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--client-ca-file' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the etcd certificate authority file parameter. +--etcd-cafile= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--etcd-cafile' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and configure a EncryptionConfig file. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --encryption-provider-config parameter to the path of that file. +For example, --encryption-provider-config= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--encryption-provider-config' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.30 Ensure that encryption providers are appropriately configured (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and configure a EncryptionConfig file. +In this file, choose aescbc, kms or secretbox as the encryption provider. + +**Audit:** + +```bash +ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%') if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi +``` + +**Expected Result**: + +```console +'provider' is present +``` + +### 1.2.31 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the below parameter. 
+--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256, +TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, +TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, +TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, +TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, +TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, +TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA, +TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384 + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +## 1.3 Controller Manager +### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold, +for example, --terminated-pod-gc-threshold=10 + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--terminated-pod-gc-threshold' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.2 Ensure that the --profiling argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the below parameter. +--profiling=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--profiling' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node to set the below parameter. +--use-service-account-credentials=true + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--use-service-account-credentials' is not equal to 'false' +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --service-account-private-key-file parameter +to the private key file for service accounts. +--service-account-private-key-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--service-account-private-key-file' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --root-ca-file parameter to the certificate bundle file. +--root-ca-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--root-ca-file' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. +--feature-gates=RotateKubeletServerCertificate=true +Cluster provisioned by RKE handles certificate rotation directly through RKE. 
+ +### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and ensure the correct value for the --bind-address parameter + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--bind-address' is present OR '--bind-address' is not present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +## 1.4 Scheduler +### 1.4.1 Ensure that the --profiling argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file +on the control 
plane node and set the below parameter. +--profiling=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-scheduler | grep -v grep +``` + +**Expected Result**: + +```console +'--profiling' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4339 4318 0 Sep11 ? 00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true +``` + +### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml +on the control plane node and ensure the correct value for the --bind-address parameter + +**Audit:** + +```bash +/bin/ps -ef | grep kube-scheduler | grep -v grep +``` + +**Expected Result**: + +```console +'--bind-address' is present OR '--bind-address' is not present +``` + +**Returned Value**: + +```console +root 4339 4318 0 Sep11 ? 
00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true +``` + +## 2 Etcd Node Configuration +### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the etcd service documentation and configure TLS encryption. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml +on the master node and set the below parameters. +--cert-file= +--key-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--cert-file' is present AND '--key-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and set the below parameter. +--client-cert-auth="true" + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.3 Ensure that the --auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and either remove the --auto-tls parameter or set it to false. 
+ --auto-tls=false + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present +``` + +**Returned Value**: + +```console +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ +``` + +### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the etcd service documentation and configure peer TLS encryption as appropriate +for your etcd cluster. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the +master node and set the below parameters. +--peer-cert-file= +--peer-key-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--peer-cert-file' is present AND '--peer-key-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and set the below parameter. +--peer-client-cert-auth=true + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--peer-client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and either remove the --peer-auto-tls parameter or set it to false. 
+--peer-auto-tls=false + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present +``` + +**Returned Value**: + +```console +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ +``` + +### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated) + + +**Result:** pass + +**Remediation:** +[Manual test] +Follow the etcd documentation and create a dedicated certificate authority setup for the +etcd service. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the +master node and set the below parameter. +--trusted-ca-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--trusted-ca-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +## 3.1 Authentication and Authorization +### 3.1.1 Client certificate authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be +implemented in place of client certificates. + +### 3.1.2 Service account token authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented +in place of service account tokens. + +### 3.1.3 Bootstrap token authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented +in place of bootstrap tokens. + +## 3.2 Logging +### 3.2.1 Ensure that a minimal audit policy is created (Automated) + + +**Result:** pass + +**Remediation:** +Create an audit policy file for your cluster. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--audit-policy-file' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the audit policy provided for the cluster and ensure that it covers
+at least the following areas:
+- Access to Secrets managed by the cluster. Care should be taken to only
+  log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
+  order to avoid risk of logging sensitive data.
+- Modification of Pod and Deployment objects.
+- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
+For most requests, minimally logging at the Metadata level is recommended
+(the most basic level of logging).
+
+## 4.1 Worker Node Configuration Files
+### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service.
+All configuration is passed in as arguments at container run time.
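The checks in section 4.1 share one audit/remediate pattern: `stat -c permissions=%a` audits a file's mode, and `chmod 600` tightens it. A minimal sketch of that loop, demonstrated on a temporary file so it is safe to run anywhere (on a real node you would substitute a cluster path such as the kubeconfig files audited below):

```bash
# Sketch of the section 4.1 audit/remediate pattern, demonstrated on a
# temporary file rather than a real cluster path.
f=$(mktemp)
chmod 644 "$f"                 # simulate an overly permissive file
stat -c 'permissions=%a' "$f"  # audit: prints permissions=644
chmod 600 "$f"                 # remediate
stat -c 'permissions=%a' "$f"  # re-audit: prints permissions=600
rm -f "$f"
```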
+
+### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service.
+All configuration is passed in as arguments at container run time.
+
+### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
+
+### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example, chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
+
+### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example, +chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml + +**Audit:** + +```bash +/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' +``` + +**Expected Result**: + +```console +'root:root' is present +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated) + + +**Result:** fail + +**Remediation:** +Run the following command to modify the file permissions of the +--client-ca-file chmod 600 + +**Audit:** + +```bash +stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem +``` + +**Expected Result**: + +```console +permissions has permissions 644, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=644 +``` + +### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the following command to modify the ownership of the --client-ca-file. +chown root:root + +**Audit:** + +```bash +stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem +``` + +**Expected Result**: + +```console +'root:root' is equal to 'root:root' +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the following command (using the config file location identified in the Audit step) +chmod 600 /var/lib/kubelet/config.yaml +Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet. +All configuration is passed in as arguments at container run time. 
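Of the file checks above, only 4.1.7 fails (kube-ca.pem is 644); its remediation is a `chmod 600` on the flagged path. A hedged batch-audit sketch that prints mode and owner for the certificate and kubeconfig paths used by checks 4.1.3 through 4.1.8, silently skipping any path absent on the current machine:

```bash
# Batch-audit sketch for the file checks above: print mode and owner for
# each path that exists. Paths are taken from the audits in this section;
# nothing is printed for paths absent on the current machine.
for f in /node/etc/kubernetes/ssl/kube-ca.pem \
         /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml \
         /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; do
  if [ -e "$f" ]; then
    stat -c "$f permissions=%a owner=%U:%G" "$f"
  fi
done
```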
+
+### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the following command (using the config file location identified in the Audit step)
+chown root:root /var/lib/kubelet/config.yaml
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet.
+All configuration is passed in as arguments at container run time.
+
+## 4.2 Kubelet
+### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
+`false`.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+`--anonymous-auth=false`
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--anonymous-auth' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If +using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--authorization-mode=Webhook +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--authorization-mode' does not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file 
to set `authentication.x509.clientCAFile` to +the location of the client CA file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--client-ca-file= +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--client-ca-file' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local 
--tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `readOnlyPort` to 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--read-only-port=0 +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--read-only-port' is equal to '0' OR '--read-only-port' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a +value other than 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--streaming-connection-idle-timeout=5m +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated) + + +**Result:** pass 
+ +**Remediation:** +If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +remove the --make-iptables-util-chains argument from the +KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 
--kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.7 Ensure that the --hostname-override argument is not set (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+on each worker node and remove the --hostname-override argument from the
+KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+Not Applicable - Clusters provisioned by RKE set --hostname-override to avoid any hostname configuration errors.
+
+### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--event-qps' is greater or equal to 0 OR '--event-qps' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `tlsCertFile` to the location +of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile` +to the location of the corresponding private key file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameters in KUBELET_CERTIFICATE_ARGS variable. 
+--tls-cert-file= +--tls-private-key-file= +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--tls-cert-file' is present AND '--tls-private-key-file' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.10 Ensure that the --rotate-certificates argument is 
not set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `rotateCertificates` to `true`, or
+remove it altogether to use the default value.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+remove the --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
+variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--rotate-certificates' is present OR '--rotate-certificates' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true 
--cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+on each worker node and set the below parameter in the KUBELET_CERTIFICATE_ARGS variable.
+--feature-gates=RotateKubeletServerCertificate=true
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+Not Applicable - Clusters provisioned by RKE handle certificate rotation directly through RKE.
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
+TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+or to a subset of these values.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the --tls-cipher-suites parameter as follows, or to a subset of these values.
+--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.13 Ensure that a limit is set on pod PIDs (Manual) + + +**Result:** warn + +**Remediation:** +Decide on an appropriate level for this parameter and set it, +either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting. + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--pod-max-pids' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +## 5.1 RBAC and Service Accounts +### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual) + + +**Result:** warn + +**Remediation:** +Identify all clusterrolebindings to the cluster-admin role. Check if they are used and +if they need this role or if they could use a role with fewer privileges. 
+Where possible, first bind users to a lower privileged role and then remove the +clusterrolebinding to the cluster-admin role : +kubectl delete clusterrolebinding [name] + +### 5.1.2 Minimize access to secrets (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove get, list and watch access to Secret objects in the cluster. + +### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual) + + +**Result:** warn + +**Remediation:** +Where possible replace any use of wildcards in clusterroles and roles with specific +objects or actions. + +### 5.1.4 Minimize access to create pods (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove create access to pod objects in the cluster. + +### 5.1.5 Ensure that default service accounts are not actively used. (Manual) + + +**Result:** pass + +**Remediation:** +Create explicit service accounts wherever a Kubernetes workload requires specific access +to the Kubernetes API server. +Modify the configuration of each default service account to include this value +automountServiceAccountToken: false + +**Audit Script:** `check_for_default_sa.sh` + +```bash +#!/bin/bash + +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + +count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l) +if [[ ${count_sa} -gt 0 ]]; then + echo "false" + exit +fi + +for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name") +do + for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"') + do + read kind name <<<$(IFS=","; echo $result) + 
resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l) + if [[ ${resource_count} -gt 0 ]]; then + echo "false" + exit + fi + done +done + + +echo "true" + +``` + +**Audit Execution:** + +```bash +./check_for_default_sa.sh +``` + +**Expected Result**: + +```console +'true' is equal to 'true' +``` + +**Returned Value**: + +```console +true +``` + +### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) + + +**Result:** warn + +**Remediation:** +Modify the definition of pods and service accounts which do not need to mount service +account tokens to disable it. + +### 5.1.7 Avoid use of system:masters group (Manual) + + +**Result:** warn + +**Remediation:** +Remove the system:masters group from all users in the cluster. + +### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove the impersonate, bind and escalate rights from subjects. + +### 5.1.9 Minimize access to create persistent volumes (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove create access to PersistentVolume objects in the cluster. + +### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the proxy sub-resource of node objects. + +### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the approval sub-resource of certificatesigningrequest objects. 
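+
+The controls in this section all follow the same least-privilege principle: grant specific verbs on specific resources instead of binding subjects to cluster-admin or using wildcards. As an illustrative sketch only — the role, namespace, and subject names below are placeholders, not part of this guide — a narrowly scoped Role avoids the cluster-admin (5.1.1), secret-access (5.1.2), and wildcard (5.1.3) patterns flagged above:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: deployment-viewer   # placeholder name
+  namespace: app-team       # placeholder namespace
+rules:
+  # Specific resources and read-only verbs; no wildcards (5.1.3),
+  # no access to Secrets (5.1.2), no create access to pods (5.1.4).
+  - apiGroups: ["apps"]
+    resources: ["deployments"]
+    verbs: ["get", "list", "watch"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: deployment-viewer-binding
+  namespace: app-team       # placeholder namespace
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: deployment-viewer
+subjects:
+  - kind: User
+    name: jane              # placeholder subject
+    apiGroup: rbac.authorization.k8s.io
+```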
+ +### 5.1.12 Minimize access to webhook configuration objects (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects + +### 5.1.13 Minimize access to the service account token creation (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the token sub-resource of serviceaccount objects. + +## 5.2 Pod Security Standards +### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual) + + +**Result:** warn + +**Remediation:** +Ensure that either Pod Security Admission or an external policy control system is in place +for every namespace which contains user workloads. + +### 5.2.2 Minimize the admission of privileged containers (Manual) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of privileged containers. + +### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostPID` containers. + +### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostIPC` containers. + +### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated) + + +**Result:** warn + +**Remediation:** +Add policies to each namespace in the cluster which has user workloads to restrict the +admission of `hostNetwork` containers. 
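+
+On Kubernetes v1.25 and later, the built-in Pod Security Admission controller is one policy control mechanism that satisfies 5.2.1 and can block the privileged, `hostPID`, `hostIPC`, and `hostNetwork` admissions covered in 5.2.2 through 5.2.5. A minimal sketch, assuming a hypothetical namespace name:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: app-team   # placeholder namespace
+  labels:
+    # Reject pods that violate the "restricted" Pod Security Standard,
+    # which forbids privileged, hostPID, hostIPC and hostNetwork pods.
+    pod-security.kubernetes.io/enforce: restricted
+    # Also log and warn on violations for easier rollout.
+    pod-security.kubernetes.io/warn: restricted
+    pod-security.kubernetes.io/audit: restricted
+```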
+
+### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
+
+### 5.2.7 Minimize the admission of root containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
+or `MustRunAs` with the range of UIDs not including 0, is set.
+
+### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with the `NET_RAW` capability.
+
+### 5.2.9 Minimize the admission of containers with added capabilities (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that `allowedCapabilities` is not present in policies for the cluster unless
+it is set to an empty array.
+
+### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the use of capabilities in applications running on your cluster. Where a namespace
+contains applications which do not require any Linux capabilities to operate, consider adding
+a policy which forbids the admission of containers which do not drop all capabilities.
+
+### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
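+
+Several of the checks above can also be addressed at the workload level with a container `securityContext`. The sketch below (pod and image names are placeholders) sets the fields relevant to 5.2.6, 5.2.7, and 5.2.10:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: hardened-example                  # placeholder name
+spec:
+  containers:
+    - name: app
+      image: registry.example.com/app:1.0 # placeholder image
+      securityContext:
+        runAsNonRoot: true                # 5.2.7: refuse to run as UID 0
+        allowPrivilegeEscalation: false   # 5.2.6
+        capabilities:
+          drop: ["ALL"]                   # 5.2.8/5.2.10: drops NET_RAW and all other capabilities
+```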
+
+### 5.2.12 Minimize the admission of HostPath volumes (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `hostPath` volumes.
+
+### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers which use `hostPort` sections.
+
+## 5.3 Network Policies and CNI
+### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If the CNI plugin in use does not support network policies, consideration should be given to
+making use of a different plugin, or finding an alternate mechanism for restricting traffic
+in the Kubernetes cluster.
+
+### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the documentation and create NetworkPolicy objects as you need them.
+
+## 5.4 Secrets Management
+### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If possible, rewrite application code to read Secrets from mounted secret files, rather than
+from environment variables.
+
+### 5.4.2 Consider external secret storage (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Refer to the Secrets management options offered by your cloud provider or a third-party
+secrets management solution.
+
+## 5.5 Extensible Admission Control
+### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and set up image provenance.
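+
+For 5.3.2, a common starting point is a default-deny NetworkPolicy in each namespace, with specific traffic re-allowed by additional policies. A minimal sketch, assuming a hypothetical namespace:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-all
+  namespace: app-team   # placeholder namespace
+spec:
+  # An empty podSelector matches every pod in the namespace.
+  podSelector: {}
+  # Listing both policy types with no ingress/egress rules denies all traffic.
+  policyTypes:
+    - Ingress
+    - Egress
+```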
+ +## 5.7 General Policies +### 5.7.1 Create administrative boundaries between resources using namespaces (Manual) + + +**Result:** warn + +**Remediation:** +Follow the documentation and create namespaces for objects in your deployment as you need +them. + +### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual) + + +**Result:** warn + +**Remediation:** +Use `securityContext` to enable the docker/default seccomp profile in your pod definitions. +An example is as below: + securityContext: + seccompProfile: + type: RuntimeDefault + +### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a +suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker +Containers. + +### 5.7.4 The default namespace should not be used (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Ensure that namespaces are created to allow for appropriate segregation of Kubernetes +resources and that all new resources are created in a specific namespace. + diff --git a/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md b/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md index 35ecd76ead2..afa5dc0fef1 100644 --- a/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md +++ b/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md @@ -510,4 +510,4 @@ rancher_kubernetes_engine_config: ## Conclusion -If you have followed this guide, your RKE custom cluster provisioned by Rancher will be configured to pass the CIS Kubernetes Benchmark. 
You can review our RKE self-assessment guides to understand how we verified each of the benchmarks and how you can do the same on your cluster. \ No newline at end of file +If you have followed this guide, your RKE custom cluster provisioned by Rancher will be configured to pass the CIS Kubernetes Benchmark. You can review our RKE self-assessment guides to understand how we verified each of the benchmarks and how you can do the same on your cluster. diff --git a/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md b/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md index e8bf71f7c78..ac002a20369 100644 --- a/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md +++ b/versioned_docs/version-2.12/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md @@ -1,2864 +1,2865 @@ ---- -title: RKE Self-Assessment Guide - CIS Benchmark v1.7 - K8s v1.25/v1.26/v1.27 ---- - - - - - - - -This document is a companion to the [RKE Hardening Guide](rke1-hardening-guide.md), which provides prescriptive guidance on how to harden RKE clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark. 
- - -This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes: - -| Rancher Version | CIS Benchmark Version | Kubernetes Version | -|-----------------|-----------------------|--------------------| -| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25/v1.26/v1.27 | - -This guide walks through the various controls and provide updated example commands to audit compliance in Rancher created clusters. Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. These checks will return a result of `Not Applicable`. - -This document is for Rancher operators, security teams, auditors and decision makers. - -For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.7. You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/). - -## Testing Methodology - -Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files. - -Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the command line on the hosts of all RKE nodes. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required in the testing and evaluation of test results. - -:::note - -This guide only covers `automated` (previously called `scored`) tests. 
- -::: - -### Controls - -## 1.1 Control Plane Node Configuration Files -### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the -control plane node. -For example, chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. 
-For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chmod 600 /etc/kubernetes/manifests/etcd.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. 
- -### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root /etc/kubernetes/manifests/etcd.yaml -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - -### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual) - - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chmod 600 - -**Audit:** - -```bash -ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a -``` - -**Expected Result**: - -```console -'permissions' is present -``` - -### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual) - - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. 
-For example, -chown root:root - -**Audit:** - -```bash -ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G -``` - -**Expected Result**: - -```console -'root:root' is present -``` - -### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated) - - -**Result:** pass - -**Remediation:** -On the etcd server node, get the etcd data directory, passed as an argument --data-dir, -from the command 'ps -ef | grep etcd'. -Run the below command (based on the etcd data directory found above). For example, -chmod 700 /var/lib/etcd - -**Audit:** - -```bash -stat -c %a /node/var/lib/etcd -``` - -**Expected Result**: - -```console -'700' is equal to '700' -``` - -**Returned Value**: - -```console -700 -``` - -### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) - - -**Result:** pass - -**Remediation:** -On the etcd server node, get the etcd data directory, passed as an argument --data-dir, -from the command 'ps -ef | grep etcd'. -Run the below command (based on the etcd data directory found above). -For example, chown etcd:etcd /var/lib/etcd - -**Audit:** - -```bash -stat -c %U:%G /node/var/lib/etcd -``` - -**Expected Result**: - -```console -'etcd:etcd' is present -``` - -**Returned Value**: - -```console -etcd:etcd -``` - -### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chmod 600 /etc/kubernetes/admin.conf -Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. 
- -### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, chown root:root /etc/kubernetes/admin.conf -Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. - -### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chmod 600 scheduler -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. - -### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root scheduler -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. - -### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chmod 600 controllermanager -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. 
- -### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown root:root controllermanager -Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. - -### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -chown -R root:root /etc/kubernetes/pki/ - -**Audit Script:** `check_files_owner_in_dir.sh` - -```bash -#!/usr/bin/env bash - -# This script is used to ensure the owner is set to root:root for -# the given directory and all the files in it -# -# inputs: -# $1 = /full/path/to/directory -# -# outputs: -# true/false - -INPUT_DIR=$1 - -if [[ "${INPUT_DIR}" == "" ]]; then - echo "false" - exit -fi - -if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then - echo "false" - exit -fi - -statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*) -while read -r statInfoLine; do - f=$(echo ${statInfoLine} | cut -d' ' -f1) - p=$(echo ${statInfoLine} | cut -d' ' -f2) - - if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then - if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then - echo "false" - exit - fi - else - if [[ "$p" != "root:root" ]]; then - echo "false" - exit - fi - fi -done <<< "${statInfoLines}" - - -echo "true" -exit - -``` - -**Audit Execution:** - -```bash -./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Returned Value**: - -```console -true -``` - -### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive 
(Manual) - - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' -exec chmod -R 600 {} + - -**Audit:** - -```bash -find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' | xargs stat -c permissions=%a -``` - -**Expected Result**: - -```console -permissions has permissions 644, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=644 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 -``` - -### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the control plane node. -For example, -find /node/etc/kubernetes/ssl/ -name '*key.pem' -exec chmod -R 600 {} + - -**Audit:** - -```bash -find /node/etc/kubernetes/ssl/ -name '*key.pem' | xargs stat -c permissions=%a -``` - -**Expected Result**: - -```console -permissions has permissions 600, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 -``` - -## 1.2 API Server -### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the below parameter. ---anonymous-auth=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--anonymous-auth' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated) - - -**Result:** pass - -**Remediation:** -Follow the documentation and configure alternate mechanisms for authentication. Then, -edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and remove the --token-auth-file= parameter. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--token-auth-file' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and remove the `DenyServiceExternalIPs` -from enabled admission plugins. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the -apiserver and kubelets. Then, edit API server pod specification file -/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the -kubelet client certificate and key parameters as below. ---kubelet-client-certificate= ---kubelet-client-key= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Follow the Kubernetes documentation and setup the TLS connection between -the apiserver and kubelets. Then, edit the API server pod specification file -/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the ---kubelet-certificate-authority parameter to the path to the cert file for the certificate authority. ---kubelet-certificate-authority= -When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. - -### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow. -One such example could be as below. 
---authorization-mode=RBAC - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' does not have 'AlwaysAllow' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to a value that includes Node. ---authorization-mode=Node,RBAC - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' has 'Node' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --authorization-mode parameter to a value that includes RBAC, -for example `--authorization-mode=Node,RBAC`. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--authorization-mode' has 'RBAC' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set the desired limits in a configuration file. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -and set the below parameters. ---enable-admission-plugins=...,EventRateLimit,... ---admission-control-config-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'EventRateLimit' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a -value that does not include AlwaysAdmit. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual) - - -**Result:** warn - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --enable-admission-plugins parameter to include -AlwaysPullImages. ---enable-admission-plugins=...,AlwaysPullImages,... - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--enable-admission-plugins' has 'AlwaysPullImages' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)


**Result:** warn

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --enable-admission-plugins parameter to include
SecurityContextDeny, unless PodSecurityPolicy is already in place.
--enable-admission-plugins=...,SecurityContextDeny,...

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated)


**Result:** pass

**Remediation:**
Follow the documentation and create ServiceAccount objects as per your environment.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
value that does not include ServiceAccount.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --disable-admission-plugins parameter to
ensure it does not include NamespaceLifecycle.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated)


**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and configure the NodeRestriction plug-in on kubelets.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --enable-admission-plugins parameter to a
value that includes NodeRestriction.
--enable-admission-plugins=...,NodeRestriction,...

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--enable-admission-plugins' has 'NodeRestriction'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.16 Ensure that the --secure-port argument is not set to 0 - Note: This recommendation is obsolete and will be deleted per the consensus process (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and either remove the --secure-port parameter or
set it to a different (non-zero) desired port.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--secure-port' is greater than 0 OR '--secure-port' is not present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.17 Ensure that the --profiling argument is set to false (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--profiling=false

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--profiling' is equal to 'false'
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.18 Ensure that the --audit-log-path argument is set (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example,
--audit-log-path=/var/log/apiserver/audit.log

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-path' is present
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-maxage parameter to 30
or as an appropriate number of days, for example,
--audit-log-maxage=30

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-maxage' is greater or equal to 30
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
value. For example,
--audit-log-maxbackup=10

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-maxbackup' is greater or equal to 10
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```

### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)


**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
For example, to set it as 100 MB, --audit-log-maxsize=100

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-maxsize' is greater or equal to 100
```

**Returned Value**:

```console
root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual) - - -**Result:** warn - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -and set the below parameter as appropriate and if needed. -For example, --request-timeout=300s - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the below parameter. ---service-account-lookup=true -Alternatively, you can delete the --service-account-lookup parameter from this file so -that the default takes effect. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --service-account-key-file parameter -to the public key file for service accounts. For example, ---service-account-key-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--service-account-key-file' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the etcd certificate and key file parameters. ---etcd-certfile= ---etcd-keyfile= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--etcd-certfile' is present AND '--etcd-keyfile' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection on the apiserver. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the TLS certificate and private key file parameters. ---tls-cert-file= ---tls-private-key-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--tls-cert-file' is present AND '--tls-private-key-file' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection on the apiserver. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the client certificate authority file. ---client-ca-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--client-ca-file' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the etcd certificate authority file parameter. ---etcd-cafile= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--etcd-cafile' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual) - - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and configure a EncryptionConfig file. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the --encryption-provider-config parameter to the path of that file. -For example, --encryption-provider-config= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--encryption-provider-config' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 1.2.30 Ensure that encryption providers are appropriately configured (Manual) - - -**Result:** warn - -**Remediation:** -Follow the Kubernetes documentation and configure a EncryptionConfig file. -In this file, choose aescbc, kms or secretbox as the encryption provider. - -**Audit:** - -```bash -ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%') if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi -``` - -**Expected Result**: - -```console -'provider' is present -``` - -### 1.2.31 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual) - - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the control plane node and set the below parameter. 
---tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256, -TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, -TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, -TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, -TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, -TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, -TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA, -TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384' -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -## 1.3 Controller Manager -### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold, -for example, --terminated-pod-gc-threshold=10 - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--terminated-pod-gc-threshold' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.2 Ensure that the --profiling argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the below parameter. ---profiling=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--profiling' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node to set the below parameter. ---use-service-account-credentials=true - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--use-service-account-credentials' is not equal to 'false' -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --service-account-private-key-file parameter -to the private key file for service accounts. ---service-account-private-key-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--service-account-private-key-file' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --root-ca-file parameter to the certificate bundle file. ---root-ca-file= - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--root-ca-file' is present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. ---feature-gates=RotateKubeletServerCertificate=true -Clusters provisioned by RKE handle certificate rotation directly through RKE.
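The audit checks above all follow the same pattern: capture the running component's command line with `ps` and confirm that a given flag carries the expected value. As a minimal sketch (the `flag_value` helper below is hypothetical, not part of kube-bench or RKE), that lookup can be scripted so individual values are easier to spot-check in the long "Returned Value" dumps:

```bash
# Hypothetical helper (not part of kube-bench): print the value of one flag
# from a captured command line, instead of scanning the whole dump by eye.
flag_value() {
  # $1: full command line, $2: flag name (e.g. --profiling)
  printf '%s\n' "$1" | tr ' ' '\n' | sed -n "s/^$2=//p"
}

# Example against a literal command line so the sketch is self-contained;
# in practice the input would come from `ps -ef | grep kube-controller-manager`.
cmdline='kube-controller-manager --profiling=false --feature-gates=RotateKubeletServerCertificate=true'
flag_value "$cmdline" --profiling       # prints: false
flag_value "$cmdline" --feature-gates   # prints: RotateKubeletServerCertificate=true
```

`sed -n "s/^$2=//p"` prints only tokens that begin with the requested flag, stripped of the `--flag=` prefix, so an absent flag simply produces no output.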
- -### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the control plane node and ensure the correct value for the --bind-address parameter - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--bind-address' is present OR '--bind-address' is not present -``` - -**Returned Value**: - -```console -root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true -``` - -## 1.4 Scheduler -### 1.4.1 Ensure that the --profiling argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file -on the control 
plane node and set the below parameter. ---profiling=false - -**Audit:** - -```bash -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected Result**: - -```console -'--profiling' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4339 4318 0 Sep11 ? 00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true -``` - -### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) - - -**Result:** pass - -**Remediation:** -Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml -on the control plane node and ensure the correct value for the --bind-address parameter - -**Audit:** - -```bash -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected Result**: - -```console -'--bind-address' is present OR '--bind-address' is not present -``` - -**Returned Value**: - -```console -root 4339 4318 0 Sep11 ? 
00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true -``` - -## 2 Etcd Node Configuration -### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the etcd service documentation and configure TLS encryption. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml -on the master node and set the below parameters. ---cert-file= ---key-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--cert-file' is present AND '--key-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and set the below parameter. ---client-cert-auth="true" - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--client-cert-auth' is equal to 'true' -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.3 Ensure that the --auto-tls argument is not set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and either remove the --auto-tls parameter or set it to false. 
- --auto-tls=false - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present -``` - -**Returned Value**: - -```console -PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ -``` - -### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -Follow the etcd service documentation and configure peer TLS encryption as appropriate -for your etcd cluster. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the -master node and set the below parameters. ---peer-cert-file= ---peer-key-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--peer-cert-file' is present AND '--peer-key-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and set the below parameter. ---peer-client-cert-auth=true - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--peer-client-cert-auth' is equal to 'true' -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated) - - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and either remove the --peer-auto-tls parameter or set it to false. 
---peer-auto-tls=false - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present -``` - -**Returned Value**: - -```console -PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ -``` - -### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated) - - -**Result:** pass - -**Remediation:** -[Manual test] -Follow the etcd documentation and create a dedicated certificate authority setup for the -etcd service. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the -master node and set the below parameter. ---trusted-ca-file= - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--trusted-ca-file' is present -``` - -**Returned Value**: - -```console -etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json -``` - -## 3.1 Authentication and Authorization -### 3.1.1 Client certificate authentication should not be used for users (Manual) - - -**Result:** warn - -**Remediation:** -Alternative mechanisms provided by Kubernetes such as the use of OIDC should be -implemented in place of client certificates. - -### 3.1.2 Service account token authentication should not be used for users (Manual) - - -**Result:** warn - -**Remediation:** -Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented -in place of service account tokens. - -### 3.1.3 Bootstrap token authentication should not be used for users (Manual) - - -**Result:** warn - -**Remediation:** -Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented -in place of bootstrap tokens. - -## 3.2 Logging -### 3.2.1 Ensure that a minimal audit policy is created (Automated) - - -**Result:** pass - -**Remediation:** -Create an audit policy file for your cluster. - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-policy-file' is present -``` - -**Returned Value**: - -```console -root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml -``` - -### 3.2.2 Ensure that the audit policy covers key security concerns (Manual) - - -**Result:** warn - -**Remediation:** -Review the audit policy provided for the cluster and ensure that it covers -at least the following areas, -- Access to Secrets managed by the cluster. Care should be taken to only - log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in - order to avoid risk of logging sensitive data. -- Modification of Pod and Deployment objects. -- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`. -For most requests, minimally logging at the Metadata level is recommended -(the most basic level of logging). - -## 4.1 Worker Node Configuration Files -### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service. -All configuration is passed in as arguments at container run time. 
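
The 4.1.x file checks that follow all share one pattern: stat the file only if it exists, then require a mode of 600 or more restrictive and root:root ownership. The sketch below condenses that pattern under stated assumptions — `check_file` is an illustrative helper name (not part of kube-bench) and GNU `stat` is assumed; on a real node you would pass the kubeconfig and CA paths used in the audits below.

```bash
#!/bin/sh
# Sketch of the permission checks used in section 4.1. `check_file` is an
# assumed helper name; requires GNU stat. "600 or more restrictive" is
# read here as: owner has at most rw, group/other have no bits set.
check_file() {
  f=$1
  if [ ! -e "$f" ]; then
    echo "$f: absent (check not applicable)"
    return 0
  fi
  perms=$(stat -c %a "$f")
  owner=$(stat -c %U:%G "$f")
  case "$perms" in
    600|400|200|0) echo "$f: permissions=$perms ok" ;;
    *)             echo "$f: permissions=$perms too permissive" ;;
  esac
  [ "$owner" = "root:root" ] || echo "$f: owner=$owner, expected root:root"
}

# Self-contained demo against a throwaway file instead of a node path.
tmp=$(mktemp)
chmod 600 "$tmp"
check_file "$tmp"
chmod 644 "$tmp"
check_file "$tmp"
rm -f "$tmp"
```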
- -### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, -chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service. - All configuration is passed in as arguments at container run time. - -### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, -chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' -``` - -**Expected Result**: - -```console -permissions has permissions 600, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 -``` - -### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. 
-For example, chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' -``` - -**Expected Result**: - -```console -'root:root' is present -``` - -**Returned Value**: - -```console -root:root -``` - -### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. -For example, -chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' -``` - -**Expected Result**: - -```console -permissions has permissions 600, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=600 -``` - -### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on each worker node. 
-For example, -chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' -``` - -**Expected Result**: - -```console -'root:root' is present -``` - -**Returned Value**: - -```console -root:root -``` - -### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated) - - -**Result:** fail - -**Remediation:** -Run the following command to modify the file permissions of the ---client-ca-file chmod 600 - -**Audit:** - -```bash -stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem -``` - -**Expected Result**: - -```console -permissions has permissions 644, expected 600 or more restrictive -``` - -**Returned Value**: - -```console -permissions=644 -``` - -### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated) - - -**Result:** pass - -**Remediation:** -Run the following command to modify the ownership of the --client-ca-file. -chown root:root - -**Audit:** - -```bash -stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem -``` - -**Expected Result**: - -```console -'root:root' is equal to 'root:root' -``` - -**Returned Value**: - -```console -root:root -``` - -### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated) - - -**Result:** Not Applicable - -**Remediation:** -Run the following command (using the config file location identified in the Audit step) -chmod 600 /var/lib/kubelet/config.yaml -Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet. -All configuration is passed in as arguments at container run time. 
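
Unlike the file checks above, the kubelet checks in section 4.2 below evaluate command-line flags taken from `/bin/ps -fC kubelet`, since RKE passes all kubelet configuration as container arguments. A minimal sketch of that flag extraction, using a hypothetical `flag_value` helper (not part of kube-bench) and a truncated sample of the command line reported in this section:

```bash
#!/bin/sh
# Pull the value of a --flag=value argument out of a kubelet command
# line, as the 4.2.x audits do. `flag_value` is an illustrative name.
flag_value() {
  # $1 = flag name (e.g. --anonymous-auth), $2 = full command line
  printf '%s\n' "$2" | tr ' ' '\n' | sed -n "s/^$1=//p"
}

# Truncated sample of the kubelet command line returned by the audits.
cmdline='kubelet --v=2 --anonymous-auth=false --read-only-port=0 --authorization-mode=Webhook'

flag_value --anonymous-auth "$cmdline"     # prints: false
flag_value --read-only-port "$cmdline"     # prints: 0
```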
- -### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Run the following command (using the config file location identified in the Audit step) -chown root:root /var/lib/kubelet/config.yaml -Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet. -All configuration is passed in as arguments at container run time. - -## 4.2 Kubelet -### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to -`false`. -If using executable arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. -`--anonymous-auth=false` -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--anonymous-auth' is equal to 'false' -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If -using executable arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_AUTHZ_ARGS variable. ---authorization-mode=Webhook -Based on your system, restart the kubelet service. 
For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--authorization-mode' does not have 'AlwaysAllow' -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file 
to set `authentication.x509.clientCAFile` to -the location of the client CA file. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_AUTHZ_ARGS variable. ---client-ca-file= -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--client-ca-file' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local 
--tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `readOnlyPort` to 0. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. ---read-only-port=0 -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--read-only-port' is equal to '0' OR '--read-only-port' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a -value other than 0. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. ---streaming-connection-idle-timeout=5m -Based on your system, restart the kubelet service. 
For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated) - - -**Result:** pass 
- -**Remediation:** -If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -remove the --make-iptables-util-chains argument from the -KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 
--kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.7 Ensure that the --hostname-override argument is not set (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -on each worker node and remove the --hostname-override argument from the -KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service -Not Applicable - Clusters provisioned by RKE set --hostname-override to avoid any hostname configuration errors. - -### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--event-qps' is greater or equal to 0 OR '--event-qps' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `tlsCertFile` to the location -of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile` -to the location of the corresponding private key file. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameters in KUBELET_CERTIFICATE_ARGS variable. 
---tls-cert-file= ---tls-private-key-file= -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--tls-cert-file' is present AND '--tls-private-key-file' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.10 Ensure that the --rotate-certificates argument is 
not set to false (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to add the line `rotateCertificates` to `true` or -remove it altogether to use the default value. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS -variable. -Based on your system, restart the kubelet service. For example, -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--rotate-certificates' is present OR '--rotate-certificates' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true 
--cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable. ---feature-gates=RotateKubeletServerCertificate=true -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service -Not Applicable - Clusters provisioned by RKE handles certificate rotation directly through RKE. - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated) - - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set `TLSCipherSuites` to -TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -or to a subset of these values. -If using executable arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the --tls-cipher-suites parameter as follows, or to a subset of these values. 
---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -### 4.2.13 Ensure that a limit is set on pod PIDs (Manual) - - -**Result:** warn - -**Remediation:** -Decide on an appropriate level for this parameter and set it, -either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting. - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Audit Config:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' -``` - -**Expected Result**: - -```console -'--pod-max-pids' is present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf -``` - -## 5.1 RBAC and Service Accounts -### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual) - - -**Result:** warn - -**Remediation:** -Identify all clusterrolebindings to the cluster-admin role. Check if they are used and -if they need this role or if they could use a role with fewer privileges. 
-Where possible, first bind users to a lower privileged role and then remove the -clusterrolebinding to the cluster-admin role : -kubectl delete clusterrolebinding [name] - -### 5.1.2 Minimize access to secrets (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove get, list and watch access to Secret objects in the cluster. - -### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual) - - -**Result:** warn - -**Remediation:** -Where possible replace any use of wildcards in clusterroles and roles with specific -objects or actions. - -### 5.1.4 Minimize access to create pods (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove create access to pod objects in the cluster. - -### 5.1.5 Ensure that default service accounts are not actively used. (Manual) - - -**Result:** pass - -**Remediation:** -Create explicit service accounts wherever a Kubernetes workload requires specific access -to the Kubernetes API server. -Modify the configuration of each default service account to include this value -automountServiceAccountToken: false - -**Audit Script:** `check_for_default_sa.sh` - -```bash -#!/bin/bash - -set -eE - -handle_error() { - echo "false" -} - -trap 'handle_error' ERR - -count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l) -if [[ ${count_sa} -gt 0 ]]; then - echo "false" - exit -fi - -for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name") -do - for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"') - do - read kind name <<<$(IFS=","; echo $result) - 
resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l) - if [[ ${resource_count} -gt 0 ]]; then - echo "false" - exit - fi - done -done - - -echo "true" - -``` - -**Audit Execution:** - -```bash -./check_for_default_sa.sh -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Returned Value**: - -```console -true -``` - -### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) - - -**Result:** warn - -**Remediation:** -Modify the definition of pods and service accounts which do not need to mount service -account tokens to disable it. - -### 5.1.7 Avoid use of system:masters group (Manual) - - -**Result:** warn - -**Remediation:** -Remove the system:masters group from all users in the cluster. - -### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove the impersonate, bind and escalate rights from subjects. - -### 5.1.9 Minimize access to create persistent volumes (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove create access to PersistentVolume objects in the cluster. - -### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the proxy sub-resource of node objects. - -### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the approval sub-resource of certificatesigningrequest objects. 
- -### 5.1.12 Minimize access to webhook configuration objects (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects - -### 5.1.13 Minimize access to the service account token creation (Manual) - - -**Result:** warn - -**Remediation:** -Where possible, remove access to the token sub-resource of serviceaccount objects. - -## 5.2 Pod Security Standards -### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual) - - -**Result:** warn - -**Remediation:** -Ensure that either Pod Security Admission or an external policy control system is in place -for every namespace which contains user workloads. - -### 5.2.2 Minimize the admission of privileged containers (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of privileged containers. - -### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostPID` containers. - -### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostIPC` containers. - -### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of `hostNetwork` containers. 
- -### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers with `.spec.allowPrivilegeEscalation` set to `true`. - -### 5.2.7 Minimize the admission of root containers (Manual) - - -**Result:** warn - -**Remediation:** -Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot` -or `MustRunAs` with the range of UIDs not including 0, is set. - -### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers with the `NET_RAW` capability. - -### 5.2.9 Minimize the admission of containers with added capabilities (Manual) - - -**Result:** warn - -**Remediation:** -Ensure that `allowedCapabilities` is not present in policies for the cluster unless -it is set to an empty array. - -### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual) - - -**Result:** warn - -**Remediation:** -Review the use of capabilites in applications running on your cluster. Where a namespace -contains applicaions which do not require any Linux capabities to operate consider adding -a PSP which forbids the admission of containers which do not drop all capabilities. - -### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`. 
- -### 5.2.12 Minimize the admission of HostPath volumes (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers with `hostPath` volumes. - -### 5.2.13 Minimize the admission of containers which use HostPorts (Manual) - - -**Result:** warn - -**Remediation:** -Add policies to each namespace in the cluster which has user workloads to restrict the -admission of containers which use `hostPort` sections. - -## 5.3 Network Policies and CNI -### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual) - - -**Result:** warn - -**Remediation:** -If the CNI plugin in use does not support network policies, consideration should be given to -making use of a different plugin, or finding an alternate mechanism for restricting traffic -in the Kubernetes cluster. - -### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual) - - -**Result:** warn - -**Remediation:** -Follow the documentation and create NetworkPolicy objects as you need them. - -## 5.4 Secrets Management -### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual) - - -**Result:** warn - -**Remediation:** -If possible, rewrite application code to read Secrets from mounted secret files, rather than -from environment variables. - -### 5.4.2 Consider external secret storage (Manual) - - -**Result:** warn - -**Remediation:** -Refer to the Secrets management options offered by your cloud provider or a third-party -secrets management solution. - -## 5.5 Extensible Admission Control -### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual) - - -**Result:** warn - -**Remediation:** -Follow the Kubernetes documentation and setup image provenance. 
- -## 5.7 General Policies -### 5.7.1 Create administrative boundaries between resources using namespaces (Manual) - - -**Result:** warn - -**Remediation:** -Follow the documentation and create namespaces for objects in your deployment as you need -them. - -### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual) - - -**Result:** warn - -**Remediation:** -Use `securityContext` to enable the docker/default seccomp profile in your pod definitions. -An example is as below: - securityContext: - seccompProfile: - type: RuntimeDefault - -### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual) - - -**Result:** warn - -**Remediation:** -Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a -suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker -Containers. - -### 5.7.4 The default namespace should not be used (Manual) - - -**Result:** Not Applicable - -**Remediation:** -Ensure that namespaces are created to allow for appropriate segregation of Kubernetes -resources and that all new resources are created in a specific namespace. +--- +title: RKE Self-Assessment Guide - CIS Benchmark v1.7 - K8s v1.25/v1.26/v1.27 +--- + + + + + + + +This document is a companion to the [RKE Hardening Guide](rke1-hardening-guide.md), which provides prescriptive guidance on how to harden RKE clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark. 
+
+
+This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
+
+| Rancher Version | CIS Benchmark Version | Kubernetes Version |
+|-----------------|-----------------------|--------------------|
+| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25/v1.26/v1.27 |
+
+This guide walks through the various controls and provides updated example commands to audit compliance in Rancher-created clusters. Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. These checks will return a result of `Not Applicable`.
+
+This document is for Rancher operators, security teams, auditors, and decision makers.
+
+For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.7. You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
+
+## Testing Methodology
+
+Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files.
+
+Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the command line on the hosts of all RKE nodes. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required to run the tests and evaluate the results.
+
+:::note
+
+This guide only covers `automated` (previously called `scored`) tests.
+
+:::
+
+### Controls
+
+## 1.1 Control Plane Node Configuration Files
+### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the
+control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 /etc/kubernetes/manifests/etcd.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
+All configuration is passed in as arguments at container run time.
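Controls 1.1.1 through 1.1.8 are Not Applicable because RKE passes all component configuration as container arguments rather than pod specification files, so the audits reduce to inspecting a process command line. The sketch below shows one way to pull a single flag value out of such an argument list; the `cmdline` sample and the `get_flag` helper are hypothetical stand-ins, not part of the benchmark (on a real node the input would come from `/bin/ps -ef | grep kube-apiserver`):

```shell
# Hypothetical command line standing in for `ps -ef` output on an RKE
# control plane node.
cmdline='kube-apiserver --anonymous-auth=false --profiling=false --secure-port=6443'

# get_flag FLAG CMDLINE -> prints the value of FLAG, if set.
get_flag() {
  printf '%s\n' "$2" | tr ' ' '\n' | sed -n "s/^$1=//p"
}

get_flag --anonymous-auth "$cmdline"   # prints: false
get_flag --secure-port "$cmdline"      # prints: 6443
```

The same pattern applies to any of the argument checks later in this guide (for example `--profiling` or `--token-auth-file`), by swapping in the flag of interest.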
+
+### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root /etc/kubernetes/manifests/etcd.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600
+
+**Audit:**
+
+```bash
+ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
+```
+
+**Expected Result**:
+
+```console
+'permissions' is present
+```
+
+### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root
+
+**Audit:**
+
+```bash
+ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above). For example,
+chmod 700 /var/lib/etcd
+
+**Audit:**
+
+```bash
+stat -c %a /node/var/lib/etcd
+```
+
+**Expected Result**:
+
+```console
+'700' is equal to '700'
+```
+
+**Returned Value**:
+
+```console
+700
+```
+
+### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above).
+For example, chown etcd:etcd /var/lib/etcd
+
+**Audit:**
+
+```bash
+stat -c %U:%G /node/var/lib/etcd
+```
+
+**Expected Result**:
+
+```console
+'etcd:etcd' is present
+```
+
+**Returned Value**:
+
+```console
+etcd:etcd
+```
+
+### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/admin.conf
+Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes.
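The 1.1.11 and 1.1.12 audits boil down to a `stat` call against the etcd data directory. The following self-contained sketch mirrors the permission check, using a temporary directory in place of the real `/var/lib/etcd` (which only exists on the etcd node); the pass/fail wording is illustrative, not the benchmark tool's output:

```shell
# Temporary directory standing in for the etcd data directory (/var/lib/etcd).
dir="$(mktemp -d)"
chmod 700 "$dir"

# Mirror of the 1.1.11 audit: the directory must be mode 700.
perms="$(stat -c %a "$dir")"
if [ "$perms" = "700" ]; then
  echo "pass: etcd data directory permissions are $perms"
else
  echo "fail: expected 700, found $perms"
fi

rmdir "$dir"
```

On a real node, substitute the directory found via `ps -ef | grep etcd` (`--data-dir`), and pair it with `stat -c %U:%G` for the 1.1.12 ownership check.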
+ +### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, chown root:root /etc/kubernetes/admin.conf +Not Applicable - Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. + +### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chmod 600 scheduler +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. +All configuration is passed in as arguments at container run time. + +### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root scheduler +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. +All configuration is passed in as arguments at container run time. + +### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chmod 600 controllermanager +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. +All configuration is passed in as arguments at container run time. 
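Controls 1.1.13 through 1.1.18 are Not Applicable for the same underlying reason: RKE never writes the kubeadm-style kubeconfig files (`admin.conf`, `scheduler.conf`, `controller-manager.conf`) to the nodes. A hedged sketch of confirming their absence; the temporary directory here is a stand-in for `/etc/kubernetes` on an actual RKE node:

```shell
# Stand-in for /etc/kubernetes on an RKE node (empty by design in this sketch).
kube_dir="$(mktemp -d)"

missing=0
for f in admin.conf scheduler.conf controller-manager.conf; do
  if [ ! -e "$kube_dir/$f" ]; then
    missing=$((missing + 1))
  fi
done
echo "$missing of 3 kubeadm config files absent"

rmdir "$kube_dir"
```

If any of these files do exist on a node, the corresponding CIS checks become applicable and the 600/root:root remediations above should be followed.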
+ +### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown root:root controllermanager +Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. +All configuration is passed in as arguments at container run time. + +### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +chown -R root:root /etc/kubernetes/pki/ + +**Audit Script:** `check_files_owner_in_dir.sh` + +```bash +#!/usr/bin/env bash + +# This script is used to ensure the owner is set to root:root for +# the given directory and all the files in it +# +# inputs: +# $1 = /full/path/to/directory +# +# outputs: +# true/false + +INPUT_DIR=$1 + +if [[ "${INPUT_DIR}" == "" ]]; then + echo "false" + exit +fi + +if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then + echo "false" + exit +fi + +statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*) +while read -r statInfoLine; do + f=$(echo ${statInfoLine} | cut -d' ' -f1) + p=$(echo ${statInfoLine} | cut -d' ' -f2) + + if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then + if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then + echo "false" + exit + fi + else + if [[ "$p" != "root:root" ]]; then + echo "false" + exit + fi + fi +done <<< "${statInfoLines}" + + +echo "true" +exit + +``` + +**Audit Execution:** + +```bash +./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl +``` + +**Expected Result**: + +```console +'true' is equal to 'true' +``` + +**Returned Value**: + +```console +true +``` + +### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive 
(Manual) + + +**Result:** warn + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' -exec chmod -R 600 {} + + +**Audit:** + +```bash +find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' | xargs stat -c permissions=%a +``` + +**Expected Result**: + +```console +permissions has permissions 644, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=644 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 +``` + +### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the control plane node. +For example, +find /node/etc/kubernetes/ssl/ -name '*key.pem' -exec chmod -R 600 {} + + +**Audit:** + +```bash +find /node/etc/kubernetes/ssl/ -name '*key.pem' | xargs stat -c permissions=%a +``` + +**Expected Result**: + +```console +permissions has permissions 600, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 +``` + +## 1.2 API Server +### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the below parameter. +--anonymous-auth=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--anonymous-auth' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the documentation and configure alternate mechanisms for authentication. Then,
+edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and remove the `--token-auth-file=<filename>` parameter.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--token-auth-file' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and remove the `DenyServiceExternalIPs` +from enabled admission plugins. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the
+apiserver and kubelets. Then, edit the API server pod specification file
+/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
+kubelet client certificate and key parameters as below.
+`--kubelet-client-certificate=<path/to/client-certificate-file>`
+`--kubelet-client-key=<path/to/client-key-file>`
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between
+the apiserver and kubelets. Then, edit the API server pod specification file
+/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
+--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
+`--kubelet-certificate-authority=<path/to/ca/cert/file>`
+Not Applicable - When generating serving certificates, functionality could break in conjunction with the hostname overrides that are required for certain cloud providers.
+
+### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
+One such example could be as below.
+--authorization-mode=RBAC
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' does not have 'AlwaysAllow'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to a value that includes Node. +--authorization-mode=Node,RBAC + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' has 'Node' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --authorization-mode parameter to a value that includes RBAC, +for example `--authorization-mode=Node,RBAC`. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--authorization-mode' has 'RBAC' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set the desired limits in a configuration file.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+and set the below parameters.
+--enable-admission-plugins=...,EventRateLimit,...
+`--admission-control-config-file=<path/to/configuration/file>`
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'EventRateLimit'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated) + + +**Result:** pass + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a +value that does not include AlwaysAdmit. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml +``` + +### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual) + + +**Result:** warn + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the control plane node and set the --enable-admission-plugins parameter to include +AlwaysPullImages. +--enable-admission-plugins=...,AlwaysPullImages,... + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--enable-admission-plugins' has 'AlwaysPullImages' +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to include
+SecurityContextDeny, unless PodSecurityPolicy is already in place.
+--enable-admission-plugins=...,SecurityContextDeny,...
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the documentation and create ServiceAccount objects as per your environment.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
+value that does not include ServiceAccount.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --disable-admission-plugins parameter to
+ensure it does not include NamespaceLifecycle.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to a
+value that includes NodeRestriction.
+--enable-admission-plugins=...,NodeRestriction,...
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'NodeRestriction'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.16 Ensure that the --secure-port argument is not set to 0 - NoteThis recommendation is obsolete and will be deleted per the consensus process (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and either remove the --secure-port parameter or
+set it to a different (non-zero) desired port.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--secure-port' is greater than 0 OR '--secure-port' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.17 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.18 Ensure that the --audit-log-path argument is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-path parameter to a suitable path and
+file where you would like audit logs to be written, for example,
+--audit-log-path=/var/log/apiserver/audit.log
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-path' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxage parameter to 30
+or as an appropriate number of days, for example,
+--audit-log-maxage=30
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxage' is greater or equal to 30
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
+value. For example,
+--audit-log-maxbackup=10
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxbackup' is greater or equal to 10
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
+For example, to set it as 100 MB, --audit-log-maxsize=100
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxsize' is greater or equal to 100
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+and set the below parameter as appropriate and if needed.
+For example, --request-timeout=300s
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--service-account-lookup=true
+Alternatively, you can delete the --service-account-lookup parameter from this file so
+that the default takes effect.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --service-account-key-file parameter
+to the public key file for service accounts. For example,
+--service-account-key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--service-account-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the etcd certificate and key file parameters.
+--etcd-certfile=
+--etcd-keyfile=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--etcd-certfile' is present AND '--etcd-keyfile' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the TLS certificate and private key file parameters.
+--tls-cert-file=
+--tls-private-key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--tls-cert-file' is present AND '--tls-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the client certificate authority file.
+--client-ca-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--client-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the etcd certificate authority file parameter.
+--etcd-cafile=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--etcd-cafile' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --encryption-provider-config parameter to the path of that file.
+For example, --encryption-provider-config=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--encryption-provider-config' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.30 Ensure that encryption providers are appropriately configured (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+In this file, choose aescbc, kms or secretbox as the encryption provider.
+
+**Audit:**
+
+```bash
+ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%')
+if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
+```
+
+**Expected Result**:
+
+```console
+'provider' is present
+```
+
+### 1.2.31 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
+TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
+TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
+TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
+TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
+TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
+TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
+TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ?
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+## 1.3 Controller Manager
+### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
+for example, --terminated-pod-gc-threshold=10
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--terminated-pod-gc-threshold' is present
+```
+
+**Returned Value**:
+
+```console
+root 4184 4163 1 Sep11 ?
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true
+```
+
+### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 4184 4163 1 Sep11 ?
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node to set the below parameter. +--use-service-account-credentials=true + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--use-service-account-credentials' is not equal to 'false' +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --service-account-private-key-file parameter +to the private key file for service accounts. +--service-account-private-key-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--service-account-private-key-file' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --root-ca-file parameter to the certificate bundle file. +--root-ca-file= + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--root-ca-file' is present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 
00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. +--feature-gates=RotateKubeletServerCertificate=true +Clusters provisioned by RKE handle certificate rotation directly through RKE. 
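+The Controller Manager checks above all parse individual flags out of the `ps` output. As a rough illustration, the same kind of flag check can be scripted; the sample command line below is abbreviated and illustrative, not taken from a live node:
+
+```bash
+# Sketch of the flag check performed by the audits above: split a process
+# command line into tokens and print the value of one --flag=value pair.
+# The sample command line is illustrative, not real kube-bench output.
+cmdline='kube-controller-manager --profiling=false --feature-gates=RotateKubeletServerCertificate=true --terminated-pod-gc-threshold=1000'
+echo "$cmdline" | tr ' ' '\n' | awk -F= '$1 == "--terminated-pod-gc-threshold" { print $2 }'
+# prints: 1000
+```
+
+On a live control plane node, `cmdline` would instead come from `/bin/ps -ef | grep kube-controller-manager | grep -v grep`, as in the audits.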
+ +### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the control plane node and ensure the correct value for the --bind-address parameter. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected Result**: + +```console +'--bind-address' is present OR '--bind-address' is not present +``` + +**Returned Value**: + +```console +root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true +``` + +## 1.4 Scheduler +### 1.4.1 Ensure that the --profiling argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml +on the control 
plane node and set the below parameter. +--profiling=false + +**Audit:** + +```bash +/bin/ps -ef | grep kube-scheduler | grep -v grep +``` + +**Expected Result**: + +```console +'--profiling' is equal to 'false' +``` + +**Returned Value**: + +```console +root 4339 4318 0 Sep11 ? 00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true +``` + +### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml +on the control plane node and ensure the correct value for the --bind-address parameter + +**Audit:** + +```bash +/bin/ps -ef | grep kube-scheduler | grep -v grep +``` + +**Expected Result**: + +```console +'--bind-address' is present OR '--bind-address' is not present +``` + +**Returned Value**: + +```console +root 4339 4318 0 Sep11 ? 
00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true +``` + +## 2 Etcd Node Configuration +### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the etcd service documentation and configure TLS encryption. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml +on the master node and set the below parameters. +--cert-file= +--key-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--cert-file' is present AND '--key-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and set the below parameter. +--client-cert-auth="true" + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.3 Ensure that the --auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and either remove the --auto-tls parameter or set it to false. 
+ --auto-tls=false + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present +``` + +**Returned Value**: + +```console +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ +``` + +### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the etcd service documentation and configure peer TLS encryption as appropriate +for your etcd cluster. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the +master node and set the below parameters. +--peer-cert-file= +--peer-key-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--peer-cert-file' is present AND '--peer-key-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and set the below parameter. +--peer-client-cert-auth=true + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--peer-client-cert-auth' is equal to 'true' +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master +node and either remove the --peer-auto-tls parameter or set it to false. 
+--peer-auto-tls=false + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present +``` + +**Returned Value**: + +```console +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/ +``` + +### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated) + + +**Result:** pass + +**Remediation:** +[Manual test] +Follow the etcd documentation and create a dedicated certificate authority setup for the +etcd service. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the +master node and set the below parameter. +--trusted-ca-file= + +**Audit:** + +```bash +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected Result**: + +```console +'--trusted-ca-file' is present +``` + +**Returned Value**: + +```console +etcd 3847 3824 2 Sep11 ? 
00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem 
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 
00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json +``` + +## 3.1 Authentication and Authorization +### 3.1.1 Client certificate authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be +implemented in place of client certificates. + +### 3.1.2 Service account token authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented +in place of service account tokens. + +### 3.1.3 Bootstrap token authentication should not be used for users (Manual) + + +**Result:** warn + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented +in place of bootstrap tokens. + +## 3.2 Logging +### 3.2.1 Ensure that a minimal audit policy is created (Automated) + + +**Result:** pass + +**Remediation:** +Create an audit policy file for your cluster. + +**Audit:** + +```bash +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected Result**: + +```console +'--audit-policy-file' is present +``` + +**Returned Value**: + +```console +root 4018 3998 5 Sep11 ? 
01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the audit policy provided for the cluster and ensure that it covers
+at least the following areas,
+- Access to Secrets managed by the cluster. Care should be taken to only
+ log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
+ order to avoid risk of logging sensitive data.
+- Modification of Pod and Deployment objects.
+- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
+For most requests, minimally logging at the Metadata level is recommended
+(the most basic level of logging).
+
+## 4.1 Worker Node Configuration Files
+### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the command below (based on the file location on your system) on each worker node.
+For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service.
+All configuration is passed in as arguments at container run time.
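The individual `stat` audits in checks 4.1.3 through 4.1.8 below can also be spot-checked in a single pass. A minimal sketch, assuming the RKE file paths used by the audits in this section (the `/node` prefix in the audits applies inside the kube-bench container; directly on the host the files live under `/etc/kubernetes/ssl`):

```bash
# Spot-check mode and ownership of the kubeconfig and CA files audited below.
# Paths are the host-side equivalents of the /node/... paths in the audits;
# adjust them if your node layout differs.
for f in /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml \
         /etc/kubernetes/ssl/kubecfg-kube-node.yaml \
         /etc/kubernetes/ssl/kube-ca.pem; do
  if test -e "$f"; then
    stat -c 'permissions=%a owner=%U:%G %n' "$f"
  fi
done
```

Any file reporting permissions looser than 600, or an owner other than `root:root`, corresponds to a failing check in this group.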
+
+### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the command below (based on the file location on your system) on each worker node.
+For example,
+chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service.
+All configuration is passed in as arguments at container run time.
+
+### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the command below (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
+
+### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the command below (based on the file location on your system) on each worker node.
+For example, chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the command below (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
+
+### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the command below (based on the file location on your system) on each worker node.
+For example, +chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml + +**Audit:** + +```bash +/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' +``` + +**Expected Result**: + +```console +'root:root' is present +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated) + + +**Result:** fail + +**Remediation:** +Run the following command to modify the file permissions of the +--client-ca-file chmod 600 + +**Audit:** + +```bash +stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem +``` + +**Expected Result**: + +```console +permissions has permissions 644, expected 600 or more restrictive +``` + +**Returned Value**: + +```console +permissions=644 +``` + +### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated) + + +**Result:** pass + +**Remediation:** +Run the following command to modify the ownership of the --client-ca-file. +chown root:root + +**Audit:** + +```bash +stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem +``` + +**Expected Result**: + +```console +'root:root' is equal to 'root:root' +``` + +**Returned Value**: + +```console +root:root +``` + +### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated) + + +**Result:** Not Applicable + +**Remediation:** +Run the following command (using the config file location identified in the Audit step) +chmod 600 /var/lib/kubelet/config.yaml +Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet. +All configuration is passed in as arguments at container run time. 
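Check 4.1.7 above is the one failing item among the file checks in this group: the audit reported `permissions=644` on the CA file. A guarded remediation sketch, assuming the same path the audit used:

```bash
# Tighten the client CA file flagged by check 4.1.7, then re-run its audit.
# Use /node/etc/kubernetes/ssl/kube-ca.pem when running inside the kube-bench container.
ca=/etc/kubernetes/ssl/kube-ca.pem
if test -e "$ca"; then
  chmod 600 "$ca"
  stat -c 'permissions=%a' "$ca"
fi
```

Note that a provisioning run or certificate rotation may rewrite the file with its default permissions, so treat this as a recurring check rather than a one-time fix.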
+ +### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Run the following command (using the config file location identified in the Audit step) +chown root:root /var/lib/kubelet/config.yaml +Not Applicable - Clusters provisioned by RKE doesn’t require or maintain a configuration file for the kubelet. +All configuration is passed in as arguments at container run time. + +## 4.2 Kubelet +### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to +`false`. +If using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +`--anonymous-auth=false` +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--anonymous-auth' is equal to 'false' +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If +using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--authorization-mode=Webhook +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--authorization-mode' does not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file 
to set `authentication.x509.clientCAFile` to +the location of the client CA file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--client-ca-file= +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--client-ca-file' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local 
--tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `readOnlyPort` to 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--read-only-port=0 +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--read-only-port' is equal to '0' OR '--read-only-port' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a +value other than 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--streaming-connection-idle-timeout=5m +Based on your system, restart the kubelet service. 
For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated) + + +**Result:** pass 
+ +**Remediation:** +If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +remove the --make-iptables-util-chains argument from the +KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 
--kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.7 Ensure that the --hostname-override argument is not set (Manual) + + +**Result:** Not Applicable + +**Remediation:** +Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +on each worker node and remove the --hostname-override argument from the +KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service +Not Applicable - Clusters provisioned by RKE set the --hostname-override to avoid any hostname configuration errors + +### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--event-qps' is greater or equal to 0 OR '--event-qps' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to set `tlsCertFile` to the location +of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile` +to the location of the corresponding private key file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameters in KUBELET_CERTIFICATE_ARGS variable. 
+--tls-cert-file= +--tls-private-key-file= +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--tls-cert-file' is present AND '--tls-private-key-file' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.10 Ensure that the --rotate-certificates argument is 
not set to false (Automated) + + +**Result:** pass + +**Remediation:** +If using a Kubelet config file, edit the file to add the line `rotateCertificates` to `true` or +remove it altogether to use the default value. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS +variable. +Based on your system, restart the kubelet service. For example, +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--rotate-certificates' is present OR '--rotate-certificates' is not present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true 
--cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
+--feature-gates=RotateKubeletServerCertificate=true
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+Not Applicable - Clusters provisioned by RKE handle certificate rotation directly through RKE.
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
+TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+or to a subset of these values.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the --tls-cipher-suites parameter as follows, or to a subset of these values. 
+--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +### 4.2.13 Ensure that a limit is set on pod PIDs (Manual) + + +**Result:** warn + +**Remediation:** +Decide on an appropriate level for this parameter and set it, +either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting. + +**Audit:** + +```bash +/bin/ps -fC kubelet +``` + +**Audit Config:** + +```bash +/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi' +``` + +**Expected Result**: + +```console +'--pod-max-pids' is present +``` + +**Returned Value**: + +```console +UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 
00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf +``` + +## 5.1 RBAC and Service Accounts +### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual) + + +**Result:** warn + +**Remediation:** +Identify all clusterrolebindings to the cluster-admin role. Check if they are used and +if they need this role or if they could use a role with fewer privileges. 
+Where possible, first bind users to a lower privileged role and then remove the +clusterrolebinding to the cluster-admin role : +kubectl delete clusterrolebinding [name] + +### 5.1.2 Minimize access to secrets (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove get, list and watch access to Secret objects in the cluster. + +### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual) + + +**Result:** warn + +**Remediation:** +Where possible replace any use of wildcards in clusterroles and roles with specific +objects or actions. + +### 5.1.4 Minimize access to create pods (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove create access to pod objects in the cluster. + +### 5.1.5 Ensure that default service accounts are not actively used. (Manual) + + +**Result:** pass + +**Remediation:** +Create explicit service accounts wherever a Kubernetes workload requires specific access +to the Kubernetes API server. +Modify the configuration of each default service account to include this value +automountServiceAccountToken: false + +**Audit Script:** `check_for_default_sa.sh` + +```bash +#!/bin/bash + +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + +count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l) +if [[ ${count_sa} -gt 0 ]]; then + echo "false" + exit +fi + +for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name") +do + for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"') + do + read kind name <<<$(IFS=","; echo $result) + 
resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l) + if [[ ${resource_count} -gt 0 ]]; then + echo "false" + exit + fi + done +done + + +echo "true" + +``` + +**Audit Execution:** + +```bash +./check_for_default_sa.sh +``` + +**Expected Result**: + +```console +'true' is equal to 'true' +``` + +**Returned Value**: + +```console +true +``` + +### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) + + +**Result:** warn + +**Remediation:** +Modify the definition of pods and service accounts which do not need to mount service +account tokens to disable it. + +### 5.1.7 Avoid use of system:masters group (Manual) + + +**Result:** warn + +**Remediation:** +Remove the system:masters group from all users in the cluster. + +### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove the impersonate, bind and escalate rights from subjects. + +### 5.1.9 Minimize access to create persistent volumes (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove create access to PersistentVolume objects in the cluster. + +### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the proxy sub-resource of node objects. + +### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove access to the approval sub-resource of certificatesigningrequest objects. 
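+
+The 5.1.x checks above share a common audit step: enumerating which subjects hold a given grant. As a minimal sketch for 5.1.1, assuming `kubectl` access to the cluster and `jq` on the audit host, the following lists every ClusterRoleBinding that grants cluster-admin together with its subjects, so each grant can be reviewed (and removed with `kubectl delete clusterrolebinding [name]` if unneeded):
+
+```bash
+# List ClusterRoleBindings that reference the cluster-admin role, with subjects.
+kubectl get clusterrolebindings -o json \
+  | jq -r '.items[]
+      | select(.roleRef.name == "cluster-admin")
+      | "\(.metadata.name): \([.subjects[]? | "\(.kind)/\(.name)"] | join(", "))"'
+```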
+
+### 5.1.12 Minimize access to webhook configuration objects (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects.
+
+### 5.1.13 Minimize access to the service account token creation (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove access to the token sub-resource of serviceaccount objects.
+
+## 5.2 Pod Security Standards
+### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that either Pod Security Admission or an external policy control system is in place
+for every namespace which contains user workloads.
+
+### 5.2.2 Minimize the admission of privileged containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of privileged containers.
+
+### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostPID` containers.
+
+### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostIPC` containers.
+
+### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostNetwork` containers. 
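+
+Checks 5.2.2 through 5.2.5 above can be addressed together with Pod Security Admission, which is configured per namespace through labels. A minimal sketch, assuming `kubectl` access and an illustrative namespace name `my-app`:
+
+```bash
+# Enforce the "restricted" Pod Security Standard on the namespace; restricted
+# rejects privileged containers, hostPID/hostIPC/hostNetwork sharing,
+# privilege escalation, and added capabilities at admission time.
+kubectl label namespace my-app \
+  pod-security.kubernetes.io/enforce=restricted \
+  pod-security.kubernetes.io/enforce-version=latest
+```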
+
+### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
+
+### 5.2.7 Minimize the admission of root containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
+or `MustRunAs` with the range of UIDs not including 0, is set.
+
+### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with the `NET_RAW` capability.
+
+### 5.2.9 Minimize the admission of containers with added capabilities (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that `allowedCapabilities` is not present in policies for the cluster unless
+it is set to an empty array.
+
+### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the use of capabilities in applications running on your cluster. Where a namespace
+contains applications which do not require any Linux capabilities to operate, consider adding
+a policy which forbids the admission of containers which do not drop all capabilities.
+
+### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`. 
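+
+At the workload level, checks 5.2.6 through 5.2.10 above map to a small set of `securityContext` fields. A container spec fragment that satisfies them might look like the following sketch (the namespace policies described above should still enforce these settings at admission time):
+  securityContext:
+    allowPrivilegeEscalation: false
+    runAsNonRoot: true
+    capabilities:
+      drop: ["ALL"]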
+
+### 5.2.12 Minimize the admission of HostPath volumes (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `hostPath` volumes.
+
+### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers which use `hostPort` sections.
+
+## 5.3 Network Policies and CNI
+### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If the CNI plugin in use does not support network policies, consideration should be given to
+making use of a different plugin, or finding an alternate mechanism for restricting traffic
+in the Kubernetes cluster.
+
+### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the documentation and create NetworkPolicy objects as you need them.
+
+## 5.4 Secrets Management
+### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If possible, rewrite application code to read Secrets from mounted secret files, rather than
+from environment variables.
+
+### 5.4.2 Consider external secret storage (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Refer to the Secrets management options offered by your cloud provider or a third-party
+secrets management solution.
+
+## 5.5 Extensible Admission Control
+### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and set up image provenance. 
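+
+For the NetworkPolicy guidance in 5.3.2 above, a common starting point is a default-deny policy per namespace, after which required traffic is re-allowed with more specific policies. A sketch, assuming `kubectl` access and an illustrative namespace `my-app`:
+
+```bash
+# Deny all ingress and egress for pods in the namespace by default (5.3.2).
+kubectl apply -n my-app -f - <<'EOF'
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-all
+spec:
+  podSelector: {}
+  policyTypes: ["Ingress", "Egress"]
+EOF
+```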
+
+## 5.7 General Policies
+### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the documentation and create namespaces for objects in your deployment as you need
+them.
+
+### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
+An example is shown below:
+  securityContext:
+    seccompProfile:
+      type: RuntimeDefault
+
+### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
+suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
+Containers.
+
+### 5.7.4 The default namespace should not be used (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
+resources and that all new resources are created in a specific namespace.
+